13th IEEE Integrated STEM Education Conference — 9 AM - 5 PM EST, Saturday, March 11

Onsite Venue - Kossiakoff Center - 11100 Johns Hopkins Road, Laurel, Maryland

K-12 Poster Abstracts


Poster Session 1

Conference: 9:00 AM — 4:50 PM EST
Local: Sat, Mar 11, 6:00 AM — 1:50 PM PST

Drone-Aided Sensor Networks for Soil Contamination Monitoring

Lizbeth He (USA)

On my drive to school each morning, I pass acres and acres of local farms, and my mom often buys fresh produce from them. However, each growing season I see machinery spraying pesticides and fertilizers over the fields. The excessive use of nitrogen-based fertilizers and chemical pesticides concerns me, as their toxic effects can contaminate the soil and end up directly in our agricultural products. Soil contamination has stunted crop growth and harmed consumers' health for centuries. A study conducted in 2015 concluded that pollution was responsible for 268 million disability-adjusted life years (DALYs), the majority of which were attributed to soil pollution alone. Additionally, a recent study concluded that 64% of global agricultural land (about 24.5 million km²) is at risk of pesticide pollution.

Given these findings, it is critical to accurately monitor soil contamination in farmland before implementing any pollution treatments. This research project applies recent advances in chemical sensors and computer networks, implementing drone-aided sensor networks to tackle the issue. A group of sensors is deployed as a sensor network to cover a particular area, and the data collected by each sensor are transmitted to a central node for storage, analysis, and further processing.

The proposed method includes two types of components in the network: chemical sensors and drones. The drone first deploys the available chemical sensors across the farmland in a formation that maximizes coverage. The sensors detect factors such as, but not limited to, heavy metals, petroleum hydrocarbons, and polychlorinated biphenyls (PCBs). The drone then functions as the central node: after data collection over a pre-scheduled duration, it flies out to receive the data from each sensor. Scientists and agricultural professionals can then use the collected data for analysis.

To evaluate the performance of the drone-aided sensor network, a mathematical model is further proposed. It aims to use the fewest resources while still providing sufficient monitoring readouts. Parameters in the model include the number of sensors, the sensor monitoring range, sensor battery life, the drone's detection power, the drone's altitude above the ground, the monitoring interval, and the required number of readouts; together they describe the characteristics of the sensor network. Typical values from sample cases are used to evaluate scenarios. Results show that this proposal offers feasible guidelines for conducting soil contamination monitoring.
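A resource model of the kind described above can be sketched in a few lines. The functions, parameter names, and sample values below are illustrative assumptions (a simple disc-coverage model), not the authors' actual formulation.

```python
import math

def sensors_needed(field_area_m2, sensor_range_m):
    """Minimum sensors to cover a field, assuming each sensor
    monitors a disc of radius sensor_range_m (assumed model)."""
    per_sensor = math.pi * sensor_range_m ** 2
    return math.ceil(field_area_m2 / per_sensor)

def readouts_possible(battery_life_h, monitoring_interval_h):
    """Readouts one sensor can deliver before its battery is exhausted."""
    return int(battery_life_h // monitoring_interval_h)

def plan_is_feasible(field_area_m2, sensor_range_m, available_sensors,
                     battery_life_h, monitoring_interval_h, required_readouts):
    """Check whether a deployment meets coverage and readout requirements."""
    enough_coverage = sensors_needed(field_area_m2, sensor_range_m) <= available_sensors
    enough_readouts = readouts_possible(battery_life_h, monitoring_interval_h) >= required_readouts
    return enough_coverage and enough_readouts
```

For example, a one-hectare field and 10 m sensor range would need 32 sensors under this toy model; the real model presumably also accounts for the drone's flight path and detection power.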
Speaker
Speaker biography is not available.

Detection of Lycorma delicatula using Thermal Imagery and UAVs

Joseph E Miller (PRISMS, USA)

The Spotted Lanternfly, Lycorma delicatula, is an invasive insect species causing damage throughout the Mid-Atlantic region. Large populations of lanternflies have recently infested states including Pennsylvania, New Jersey, New York, and Connecticut. They harm native wildlife and crops by feeding on them, blocking sunlight, and excreting honeydew.

Spotted Lanternflies often live on the trunks of trees, and it has been difficult to estimate their full population distribution through conventional means because of inaccessible locations and high elevations. This project describes a method to detect Spotted Lanternfly populations using an Unmanned Aerial Vehicle equipped with a FLIR Vue Pro thermal camera to record heat signatures. Likely as a result of the Spotted Lanternfly's natural metabolism, the body heat of adults can be detected in thermal infrared imagery when the temperature difference between the lanternfly and the background is significant enough, which occurs at an ambient temperature of approximately 50°F. The resulting footage is processed with object-detection machine learning to provide an accurate estimate of colony size, location, and spread. The data produced by this study will be used to create a density map of Spotted Lanternflies over a given area.

This will provide an easy way to count Spotted Lanternfly for observational studies and future research.
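The detectability condition described in the abstract can be sketched as a single check. The ~50°F ambient threshold comes from the abstract; the minimum-contrast value below is an assumed placeholder, not a measured figure.

```python
def detectable(fly_temp_f, background_temp_f, ambient_temp_f,
               min_contrast_f=2.0, max_ambient_f=50.0):
    """A lanternfly shows up in thermal imagery only when its body heat
    stands out against the background; per the abstract, this occurs at
    ambient temperatures of roughly 50 F. min_contrast_f is an assumed
    placeholder for the camera's practical sensitivity."""
    contrast = abs(fly_temp_f - background_temp_f)
    return contrast >= min_contrast_f and ambient_temp_f <= max_ambient_f
```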

A Biomedical Device for Separating Fluids from Tissues - FluidXtractor

Arthur Yang (Marriotts Ridge High School, USA); Feng Ouyang (Johns Hopkins University / Applied Physics Lab, USA)

Biopsies are commonly used for molecular studies in the laboratory. However, tissue fluid, which flows between cells and blood capillaries and is collected along with the tissue specimen, is similar in composition to plasma, so it must be separated from the tissue so that the two can be studied respectively. Currently, there are no efficient isolation methods that can separate blood or tissue fluid from the tissue itself during molecular characterization. In this study, a biomedical device was designed to separate tissue fluid or blood from tissue using a vacuum-assisted filtration method. The device has two chambers: one for collecting the extracted tissue fluids and the other for holding the remaining tissue. Two versions of the device were produced, one with a lower-power vacuum pump and the other with a higher-power pump. The devices were tested on multiple tissue samples purchased at a grocery store, including pork liver, pork tenderloin, and beef sirloin. Each sample was tested under fixed conditions (the same duration and location) and was weighed before and after each test. Afterwards, samples were frozen and sent for glycan analysis, and tissues were characterized by their glycan profiles before and after vacuum-assisted filtration. The results demonstrated that the device could remove tissue fluids and facilitate the analysis of tissue-specific molecules while minimizing tissue-fluid contamination. The results also showed that the vacuum pump's power did not significantly affect tissue-fluid removal, with a difference of less than 5% between samples processed with the stronger pump and those processed with the weaker one.

SOS.net: A Robust System Harnessing the Power of AI to Expedite Search and Rescue Missions

Nesara Shree (Portland State University, USA)

According to the National Missing and Unidentified Persons System, over 600,000 people go missing in the US wilderness every year, and at least 1,600 people are currently missing, and those are only the cases that were officially reported.

Current drone- and human-vision-dependent systems are not only incredibly inefficient but also tiring for drone pilot operators, who carry out over 60 Search and Rescue (SAR) missions a year. Thermal-detection drones, the main alternative, are inaccurate and overly general, picking up unrelated, inanimate objects that radiate heat. This is where I saw Artificial Intelligence (AI), Machine Learning (ML), and their powerful Computer Vision (CV) capabilities coming into play. What is needed is a reliable system that can accurately locate and signal human presence or distress by recognizing its visual indicators, and AI's application is a crucial first step toward expediting SAR missions, relieving the strain on our SAR teams, and saving lives. The goal is to make human search missions far more refined and efficient by implementing an R-CNN model with a ResNet50 backbone. SOS.net is open source: all of the code, the procedures, a snapshot of the trained model, and the option to access the entire dataset are available to the general public on GitHub to download and/or contribute to. This makes SOS.net a dynamic yet robust AI system with high potential for real-world implementation and continued refinement as a tool.

Design Calculations of a Biochair for Patients Requiring Leg Rehabilitation

Pranav R Bellannagari (IntelliScience Institute & San Jose State University, USA)

The biochair is a rehabilitation chair designed specifically for patients with leg injuries or disabilities. The chair is equipped with two flexible leg-holding support plates, which are activated by a control panel to move the legs up and down automatically during rehabilitation exercises. The control system employs an Arduino microcontroller together with motor drivers connected to the stepper motors that actuate the system, and a software program dictates the required movements of the leg-supporting plates. The design also converts the motors' rotary motion into the translational motion of a linear actuator, which is mounted on a ballscrew and connected to the leg-resting plate to give it the required angular displacement. Two NEMA-23 H2045 motors (1.8 degrees/step) were used in this project. Assuming a human leg mass of about 7.0 kg, the required torque was calculated for the ballscrew (diameter = 16 mm, pitch/lead = 3 mm) by estimating the mechanical advantage and the required effort force (~25.15 N). The calculations showed that a torque of about 0.2 Nm is required to move the mass along the ballscrew. The step rate was calculated at around 100 steps per second, which is within the motor's pull-in rate at the required torque. Rotary sensors attached to the chair's moving support plates provide angular readings of the leg position during exercise. The force produced as a function of the angular displacement of the leg-supporting plates was also calculated; the maximum force (~1000 N) occurs at around 39 degrees from the vertical position of the plates. The velocities of the leg-rest plates were calculated as a function of the linear actuator's linear velocity, and a linear relationship was found. Two mechanisms for activating the system have been used.
In the first, a push button initiates the central control system to activate the movements of the chair's leg-supporting plates. The other option uses signals from EMG sensors mounted on the subject's leg muscles, which are provided to the central control system to activate operation. EMG signals are, in general, noisy and need processing to increase the signal-to-noise ratio, as well as amplification, before they can be used to activate the system. The work presented in this poster will include detailed signal processing for the EMG sensors. Biochair performance testing is in progress, and new results will be added to the final presentation. Overall, the biochair provides a cutting-edge solution for patients undergoing leg rehabilitation, especially those who need extra assistance moving their legs for various rehab exercises. A disclosure has been submitted on the chair design, and a patent application is under review.
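The ballscrew sizing above can be checked against the textbook lifting-torque and step-rate formulas. The efficiency value below is an assumed placeholder, and this simple formula gives only the minimum torque to raise the load; the authors' 0.2 Nm figure presumably also covers friction, acceleration, and a safety margin.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def ballscrew_torque(load_kg, lead_m, efficiency=0.9):
    """Minimum torque to raise a load with a ballscrew:
    T = F * lead / (2 * pi * efficiency). Efficiency is an assumed value."""
    force = load_kg * G
    return force * lead_m / (2 * math.pi * efficiency)

def linear_speed(steps_per_s, steps_per_rev, lead_m):
    """Translational speed produced by a stepper driving the screw."""
    revs_per_s = steps_per_s / steps_per_rev
    return revs_per_s * lead_m

# 7 kg leg and 3 mm lead are taken from the abstract;
# 1.8 deg/step gives 200 steps per revolution.
torque = ballscrew_torque(7.0, 0.003)
speed = linear_speed(100, 200, 0.003)  # m/s at the stated 100 steps/s
```

At 100 steps/s this yields 1.5 mm/s of travel along the screw.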

Design and Testing of A Multifaceted DBD Plasma Torch

Karthik Hari (Santa Teresa High School & San Jose State University, USA); Krishnaveni Parvataneni (BASIS Independent Silicon Valley, USA)

Low-temperature plasma generated by dielectric barrier discharge (DBD) is used for fast wound healing and sterilization; the temperature and radicals of the plasma play an important role in the healing process. Recent research efforts at San Jose State University (SJSU) are focusing on various DBD torch designs and their characterization. In earlier designs, the plasma was created in a single thin dielectric tube where the plasma properties could not be adjusted without altering the input power or gas flow rates, which was inconvenient during operation of the torch. In a revised design, a variable outer electrode was used to control the plasma jet characteristics [S.H. Zaidi, US Patent US 9,433,071 B2]. This design was further improved at SJSU with a multi-electrode plasma torch that used multiple fixed outer electrodes, eliminating the variable electrode; it was shown that the choice of outer electrode can affect the plasma radicals and the rotational, vibrational, and electronic plasma temperatures. Controlling these characteristics is important for finding optimum operating conditions that can mitigate bacteria and enhance the wound healing process. In the current research, the single-jet plasma torch is further improved with a new multifacet design in which multiple plasma jets can eject simultaneously from the torch. Due to its small size (~2-3 mm diameter), a single-jet torch may take a long time to scan large wound surfaces; the multifacet torch, with its four plasma jets, can cover larger wound areas in a shorter period of time. The current design operates at 5-10 kV (20-40 kHz) with argon as the working gas (10-20 slpm), producing plasma jets about 3-5 cm long. The plasma gas temperatures were measured along the jet, showing a significant variation.
The highest gas temperatures (~50 °C) were found near the jet exit, whereas the temperatures at the jet tips were about 30-35 °C. An Ocean Optics UV-IR fiber-based spectrometer was used to capture the spectral features of the light emitted by the plasma, and various emission lines for different radicals were identified in the spectrum. SpecAir software was used to estimate the vibrational, rotational, and electronic plasma temperatures at various operating conditions. The poster will include full details of the multifacet torch design along with characterization results summarizing the impact of various operating conditions on the plasma radicals and temperatures. It is anticipated that understanding both the plasma radicals and the plasma temperatures will help explain the wound healing and sterilization process in an effective manner.

Impact of Training/Testing data ratio on ML Model Accuracy in predicting Cardiac Patient's Mortality

Siddhartha Shibi (Washington High School & Intelliscience Training Institute, USA); Vaishali Jha (Evergreen Valley High School, USA)

Coronary Artery Disease (CAD), a common heart disease in the US, is the narrowing or blockage of the coronary arteries, which reduces the supply of blood, nutrients, and oxygen to the heart. It may be due to smoking, high blood pressure, high cholesterol, lack of regular exercise, diabetes, or thrombosis. Predicting the disease outcome in advance could save thousands of lives. The main objective of this research is to analyze CAD patient data and develop machine learning models to predict in-hospital outcomes (Discharge, DAMA/Discharge Against Medical Advice, Mortality) for such patients. For this purpose, data released by a hospital in Punjab, India was collected from Kaggle (an open-source data platform) and analyzed to develop a model that could predict patient outcomes. The data featured 6500+ patients with 52 recorded variables, including blood pressure, diabetes, and age. The project used IBM's automated AI services (Watson Studio), and various machine learning models were developed using different algorithms, including Snap Random Forest. The accuracy of the supervised models was further enhanced by incorporating feature-engineering techniques and hyperparameter optimization. Feature engineering (FE) transforms raw data into features used for supervised learning, optimizing both the data transformations and the model's accuracy; both HPO-1 and HPO-2 (hyperparameter optimization) passes evaluate the resulting accuracy. In our modeling, the training data, a subset of the original data, is used to train the machine learning model, whereas the rest of the data, the testing dataset, is used to evaluate the model's accuracy.
To see the impact of the split ratio on model accuracy, we created three splits: 90% training data with 10% testing data, 80% with 20%, and 70% with 30%, respectively. Results from the Snap Random Forest algorithm are presented here. The three models are compared for accuracy and for the top contributing features selected for the model's predictions. For instance, the model's accuracy dropped from 88.7% to 87.3% as the split changed from 80-20 to 70-30. For the 90-10 split, the top features urea (100%), glucose (89%), eject_fraction (77%), leuk_count (68%), and creatinine (51%) changed to platelets (100%), leuk_count (77%), bnp (72%), glucose (66%), and creatinine (54%). Our research shows that the train-test split ratio does affect which features play a prominent role in predicting the model's outcome. The variation in the relative percentage importance of these features cannot be ignored and needs more research to understand its full impact on ML model accuracy. This poster will include a discussion of the various algorithms employed in this research along with the split-ratio impact data on the model's outcome.
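The split-and-evaluate procedure above can be illustrated with a toy stand-in. The synthetic one-dimensional data and the nearest-centroid classifier below are placeholders for the real CAD dataset and the Snap Random Forest model, which are not reproduced here; only the shuffling-and-splitting mechanics mirror the study.

```python
import random

def train_test_split(rows, labels, train_frac, seed=0):
    """Shuffle and split a dataset into train/test portions."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(rows) * train_frac)
    train, test = idx[:cut], idx[cut:]
    return ([rows[i] for i in train], [labels[i] for i in train],
            [rows[i] for i in test], [labels[i] for i in test])

def nearest_centroid_accuracy(x_tr, y_tr, x_te, y_te):
    """Toy classifier: predict the class whose training-set mean is closest."""
    centroids = {}
    for cls in set(y_tr):
        pts = [x for x, y in zip(x_tr, y_tr) if y == cls]
        centroids[cls] = sum(pts) / len(pts)
    correct = 0
    for x, y in zip(x_te, y_te):
        pred = min(centroids, key=lambda c: abs(x - centroids[c]))
        correct += pred == y
    return correct / len(y_te)

# Synthetic 1-D data: class 0 clustered near 0.0, class 1 near 1.0.
rng = random.Random(1)
labels = [i % 2 for i in range(200)]
data = [rng.gauss(lab, 0.2) for lab in labels]

accuracies = {frac: nearest_centroid_accuracy(*train_test_split(data, labels, frac))
              for frac in (0.9, 0.8, 0.7)}
```

Comparing `accuracies` across the three ratios reproduces the experiment's structure, though with well-separated synthetic classes the differences are much smaller than on real clinical data.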

MATLAB Based Meta-Analysis Code Providing Common Perspective by Synthesizing Data from Various Sources

Himani Jha (Intelliscience Research Institute, USA); Rina M Weaver (Intelliscience Institute & San Jose State University, USA); Ambika Palleti (Evergreen Valley High School, USA)

Due to unparalleled development in the IT sector and computer technology, the capacity to generate medical data has grown exponentially. Efficient tools for data analysis are now required to provide insight into many emerging health problems. This becomes particularly relevant when data on a particular medical issue is generated by multiple research groups. The tool for analyzing randomized controlled trial data efficiently is meta-analysis, which can examine data from around the world from a common perspective so that overall conclusions can be drawn. Meta-analytic synthesis can be performed with both random-effect and fixed-effect models. We have developed open-source meta-analysis code that uses these models to analyze continuous, binary, and correlation data; data borrowed from the literature were used to evaluate our models. The source used for the fixed-effect and random-effect models was Borenstein et al. [2007], and the example data in Borenstein's book were used to develop our MATLAB models. For continuous data, the fixed-effect model used the mean, standard deviation, and sample size to estimate the bias-corrected standardized mean difference (Hedges' g), which served as the effect-size measure. The summary effect size was then calculated to obtain the confidence interval, Z value, and p-values, along with the variance of the true standardized mean differences. For binary data, the fixed-effect analysis starts with events and non-events in two independent groups and uses either the odds ratio or the risk ratio as the effect-size measure. This part of the meta-analysis code was first tested on the example data in Borenstein's work and then applied to the targeted data collected for this study.
It was found that our MATLAB-based code was robust enough to generate accurate results.
In the second part of the code, heterogeneity was addressed by identifying and quantifying it in the effect sizes. Because the observed variation in the estimated effect sizes includes both true variation and random error, the true variance must be isolated and then used to characterize the dispersion from various perspectives. To achieve this, the MATLAB model was extended to determine the Q statistic (a measure of weighted squared deviations), the result of a statistical test based on Q (i.e., the p-value), the between-studies variance (T²), the between-studies standard deviation (T), and the ratio of true heterogeneity to total observed variation (I²). This analysis provides evidence of heterogeneity in the true effect size. The newly developed MATLAB code is open source and will be available for any user to process data. Our poster will include details of the models adopted in the code, along with example results obtained by running it on target data borrowed from the literature.
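The fixed-effect summary and the Q, T², and I² statistics described above can be written compactly. This Python sketch mirrors the standard inverse-variance formulas from Borenstein et al. that the authors implemented in MATLAB; the effect sizes in the example are made up for illustration, and the Hedges' g computation from raw means is omitted (effects and variances are taken as given).

```python
import math

def fixed_effect_meta(effects, variances):
    """Fixed-effect summary via inverse-variance weighting:
    returns summary effect, its standard error, Z, and the 95% CI."""
    weights = [1.0 / v for v in variances]
    m = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    z = m / se
    ci = (m - 1.96 * se, m + 1.96 * se)
    return m, se, z, ci

def heterogeneity(effects, variances):
    """Q statistic, between-studies variance T^2 (DerSimonian-Laird),
    and I^2, the share of total variation due to true heterogeneity."""
    weights = [1.0 / v for v in variances]
    m = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - m) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    c = sum(weights) - sum(w * w for w in weights) / sum(weights)
    t2 = max(0.0, (q - df) / c) if c > 0 else 0.0
    i2 = max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, t2, i2
```

Identical study effects give Q = 0 and I² = 0, which is a convenient sanity check for any implementation.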

Automating Conventional Intravenous Stands for Easier Hospital Infusion

Yihan Chen (PRISMS, USA)

Infusion errors have long been a serious danger, directly or indirectly involved in over 50 percent of hospital accidents. Yet because infusion seems routine, the problem receives little attention. As a kid, I often received infusions, so I understand firsthand that there is much room for improvement in infusion care.
The automated version of the intravenous stand aims for a better infusion experience for both patients and medical staff, and it should be easily adaptable to current hospitals.
I am creating a clip-on device for existing intravenous stands. It should free the patients' hands and help with their mobility to some extent; monitor the medicinal fluids, infusion frequency, and the patient's status; and share all patient data with the medical staff.
The project is still in progress.

Building a Standing Mobility Device to Help Handicapped People

Yihan Chen (PRISMS, USA)

The world is paying more and more attention to human equality, including creating a friendlier environment for disabled people, and assistive devices now appear in public regularly. However, little attention goes to improving the mobility and independence of disabled people themselves. In this paper, a Standing Mobility Device is presented.
The Standing Mobility Device mainly focuses on helping people with lower-body disabilities. As its name suggests, the device enables them to stand up and move around on a level surface.
The goal of the Standing Mobility Device is to let disabled people in wheelchairs move around more freely and live more independently. The whole device looks like a small car that a person can sit on. A pushing mechanism raises the person into a standing position, while wheels at the bottom move the person and the device together across a flat surface. The pushing mechanism uses an electric linear actuator ("electric putter") for the main force input. An Arduino controls the two functions, and buttons and a rocker let the person control the device. The overall structure is built from aluminum bars and plates. Testing experiments on the device demonstrate its feasibility.

Using human body tracking technology to analyze the double axel in figure skating

Wanyun Qu (High School, USA)

Figure skating is becoming a very popular sport, and I am a figure skater myself, but the double axel (a jump) has gotten in my way; I have been working on it for years. I therefore want to research the double axel and use human body tracking technology to analyze the key parameters of the jump, with the aim of increasing its success rate. By measuring the height, speed, and other essential elements of jumps, I want to learn how to increase the success rate and reduce injury. Furthermore, if possible, I want to improve or design a jump harness (the equipment that assists jumps on the ice) according to my results.

Instrumentation and Control of a Fluidic Muscle-Based Exoskeleton Device for Leg Rehabilitation

Rishit Agrawal (Evergreen Valley High School & IntelliScience Training Institute / San Jose State University, USA); Sahana Chowlur (Silver Creek High School & IntelliScience Institute / San Jose State University, USA)

This STEM-based project was initiated to design and develop an exoskeleton device for leg muscle rehabilitation at San Jose State University. The device is designed for patients who have lost leg mobility due to a debilitating illness and is intended to support rehabilitation exercises without any external human assistance. It consists of two movable brackets designed to be mounted on the patient's leg. The initial design incorporated two fluidic muscles on the top bracket and two on the bottom, but a few limitations were identified in characterizing that device, especially in the angular displacement of the brackets. In the current project, we improved the design: the top part of the bracket now uses five fluidic muscles, which require an air pressure of 30-60 PSI to operate. By pressurizing and depressurizing, the fluidic muscles mimic the leg muscles' movement as the patient moves the leg during rehabilitation exercises, while the lower part of the bracket supports the leg during its motion. The central control system is based on a microcontroller (Arduino Uno), which receives input signals from EMG sensors on the patient's leg. The EMG sensors are activated by leg-muscle movement as soon as the patient intends to move his or her legs. These noisy signals require further processing to achieve a high signal-to-noise ratio, and the processed EMG signals are then amplified to activate the central control system. A control box was strategically designed to house the Arduino circuit and various other components, such as relays, solenoids, actuators, and regulators. A relay electrically controls the current flow to the actuators responsible for determining the airflow direction, and solenoids within the control box assist the actuators. Before the device's operation was characterized, the sensors were calibrated.
That included calibration of the MyoWare EMG sensors and of the Cylewet KY-040 rotary encoder used to measure the device's angular displacement and angular velocity. The rotary encoder, mounted at the joint between the two brackets of the exoskeleton, was calibrated by comparing the angles predicted by the software with angles measured using a protractor. The rotary encoders were also used to estimate the device's rotation velocity by measuring the time of rotation, and the EMG sensors' signals were optimized for their mounting locations on the leg. The work in this presentation will highlight the various control circuits and associated design efforts behind the control box, and the sensor calibration programs and performance-characterization data will be presented in greater detail.
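The rectify-smooth-threshold chain commonly used to condition noisy EMG signals, as described above, can be sketched as follows. The window size and activation threshold are assumed placeholders, not values from the project.

```python
def emg_envelope(samples, window=5):
    """Full-wave rectify a raw EMG trace, then smooth it with a
    moving average to raise the signal-to-noise ratio."""
    rectified = [abs(s) for s in samples]
    env = []
    for i in range(len(rectified)):
        lo = max(0, i - window + 1)
        env.append(sum(rectified[lo:i + 1]) / (i + 1 - lo))
    return env

def muscle_active(samples, threshold=0.5, window=5):
    """Signal an intended movement when the smoothed envelope exceeds
    an (assumed) activation threshold, as the control system requires."""
    return max(emg_envelope(samples, window)) >= threshold
```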

Domestic Wind Power Apparatus

Man Kin Cheng, Andrew Wong, Shing Chan and Christopher Tang (Bishop Hall Jubilee School, Hong Kong)

Energy problems are prevalent worldwide. For instance, many European countries are facing an energy shortage while confronting a harsh winter; the energy crisis in Europe has led to panic buying of wood-burning stoves as insurance against power cuts. If Europe, home to several of the world's largest economies, suffers an energy shortage, the rest of the world is affected, and a domino effect occurs.
Furthermore, although non-renewable energy resources such as petroleum are vast, we consume them far faster than they form. We therefore need to confront the energy crisis and develop renewable energy, which is more sustainable.
Wind power is the largest source of renewable electricity generation in the US, providing 10.2% of the country's electricity and growing. Its advantages include occupying little land and having minimal environmental impact.
We aim to provide a sustainable energy supply and reduce pollution produced by the consumption of non-renewable energy. We want to alleviate the energy crisis worldwide by creating a wind power generator.
We faced some difficulties while building the device. For example, we needed to lower the output voltage to 5 V, the common voltage for daily use, and we had to design a custom printed circuit board (PCB) because no suitable ready-made PCB was available on the market.
We first researched online how to build a wind turbine model. Then we designed our own PCB, with rectifier diodes to stabilize the voltage and a capacitor to store the electricity, followed by a 5 V/3 A DC-DC converter module to convert all voltages to 5 V. Lastly, we tested all the models we made.
When the wind speed in front of the blade is greater than the wind speed behind it, the blade starts to turn, generating power from its movement. The energy then flows into the PCB: the rectifier diodes first convert the output to a steady DC voltage, which then passes through the capacitor and the 5 V/3 A DC-DC converter module, and finally charges the battery. Energy can be consumed by charging batteries and phones through a USB plug.
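The power such a generator could deliver at the 5 V output can be estimated from the standard wind-power formula. The rotor size, wind speed, and combined efficiency below are illustrative assumptions, not measurements of the device.

```python
import math

AIR_DENSITY = 1.225  # kg/m^3 at sea level

def wind_power_w(rotor_radius_m, wind_speed_ms, efficiency=0.3):
    """P = 0.5 * rho * A * v^3, scaled by an assumed overall efficiency
    (aerodynamic power coefficient plus generator and converter losses)."""
    area = math.pi * rotor_radius_m ** 2
    return 0.5 * AIR_DENSITY * area * wind_speed_ms ** 3 * efficiency

def charge_current_a(power_w, bus_voltage_v=5.0):
    """Current available at the 5 V USB output for a given input power."""
    return power_w / bus_voltage_v

p = wind_power_w(0.3, 6.0)  # assumed 30 cm blades in a 6 m/s breeze
i = charge_current_a(p)     # roughly 2 A at 5 V under these assumptions
```

Because power scales with the cube of wind speed, even modestly larger blades or breezier sites change the charging current dramatically, which supports the team's plan to enlarge the turbine.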
The product meets the intended goals: it converts kinetic energy into electrical energy for charging, generating electricity from wind power. If widely used, it would decrease the use of non-renewable energy.
Our device cannot yet withstand strong winds; we hope to build a sturdier stand from tougher materials to overcome this. In addition, we may enlarge the wind turbine model and blades for better efficiency and the ability to generate more energy.
We hope that each household could own our device and put it in their garden or balcony to generate electricity for personal use to save energy.

Muscle-Inspired Home Automation System

Andrew Yuting Lu (Oyster River Middle School, USA); Femi Olugbon (University of New Hampshire, USA)

Aging causes various health issues, such as muscle weakness, joint problems, and neurological (brain and nervous system) problems, which challenge seniors' mobility in their daily life, and extensive life-care services often come at significant cost. To support an affordable smart home for senior people, we propose a home automation system that uses muscle sensors to control smart devices. Since Internet-of-Things (IoT) technologies are widely applied in household appliances, those appliances can be manipulated remotely via a Message Queuing Telemetry Transport (MQTT) server using existing cloud services (e.g., Amazon Web Services). In this project, we interpreted the signal from a muscle sensor as a light-control instruction and deployed MQTT clients over the Transmission Control Protocol on a pair of microcontroller units (MCUs) to create a remote-control channel. C++ was the primary language used in this project; all code was developed and debugged in the PlatformIO development environment using the Arduino framework. This two-week research project motivated a middle-school student to pursue future study in Science, Technology, Engineering, and Mathematics (STEM) and also enabled him to practice his prior skills in coding and sensor use. The project outcome is encouraging, and in future work we will further advance the remote-control technology in other bio-signal-inspired home automation systems.
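The muscle-to-light mapping at the heart of the system can be sketched without the actual transport. The topic name and threshold below are assumed placeholders; in a real deployment the MCU would hand the resulting topic/payload pair to an MQTT client publishing over TCP to a cloud broker, as the abstract describes (the project itself used C++ on the Arduino framework).

```python
def muscle_to_instruction(sensor_value, threshold=512,
                          topic="home/livingroom/light"):
    """Interpret a raw muscle-sensor reading (e.g., a 10-bit ADC value)
    as an MQTT-style topic/payload pair that turns a light on or off.
    The threshold and topic name are illustrative assumptions."""
    payload = "ON" if sensor_value >= threshold else "OFF"
    return topic, payload
```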

Immersive Experiences in the Omniverse Channel

Adrik Ray (Huber Street Elementary School, USA)

The Metaverse and Omniverse represent a new frontier in human experiences, with the potential to transform the way we interact with technology. By incorporating augmented and virtual realities, these new channels offer the opportunity to create truly immersive experiences that can enhance our understanding of the world around us.

One area in which this technology can be particularly transformative is leveraging weather forecasts in predicting and planning our experiences. By incorporating weather data into augmented and virtual realities, users can experience the impacts of weather in a way that is not possible with traditional methods of data delivery. This allows for a more intuitive and engaging way to understand and prepare for changing weather conditions and plan for the best possible experiences.

The possibilities for immersive experiences are not limited to weather forecasts. Other areas such as sports, education, and entertainment can all be transformed by incorporating augmented and virtual realities. This can result in a more personalized, engaging and collaborative experience for users, with the potential to revolutionize the way we learn, play, and interact with the world.

The potential impact of the Metaverse and Omniverse goes beyond the realm of individual experience. They have the potential to create new opportunities for businesses and commerce, as well as to facilitate communication and collaboration on a global scale.

While there are still many technical and ethical challenges to overcome in the development of these new channels, the potential benefits are enormous. As we continue to push the boundaries of technology and explore new frontiers in human experiences, the Metaverse and Omniverse offer the promise of a truly transformative future.

My paper focuses on combining weather forecasts and augmented reality to create an immersive experience for visualizing a location at a future day and time. Though I have used weather forecasts to create this future immersive experience, the approach is not limited to weather. It can be expanded to include other elements such as crowd density, the surrounding environment, and augmentations with avatars for added effect. These experiences can be rendered in Metaverse and Omniverse channels, as well as in existing channels like mobile. Such experiences can enable better decision making, planning, and satisfaction.

Modulation and Noise Effects in a Free-Space Optical Communication System

Joseph M Bailor (Johns Hopkins University Applied Physics Laboratory, USA); Jeremy Chung (Johns Hopkins University Applied Physics Laboratory & Winston Churchill High School, USA); Jonathan C Moses (Mount Saint Joseph High School & Johns Hopkins University Applied Physics Laboratory, USA); Jose Martinez Lopez, Jade Sim and Jony Teklemariyam (Johns Hopkins University Applied Physics Laboratory, USA)

Free-space optical (FSO) communication is a method of transmitting information over infrared lasers and provides an alternative to cable-restricted fiber optics. There are some applications for which FSO outperforms both fiber optics and RF, including extraterrestrial communications, solutions for telecommunications networks, and communication between vessels. However, disturbances, irregularities, and beam divergence attenuate the signal's power as it travels through the atmosphere, affecting the efficiency of data transfer. This project studies how the error rate of a signal changes as its signal-to-noise ratio (SNR) is adjusted. The process by which data is encoded onto a laser signal is known as modulation. The modulation formats used in our research were On-Off Keying (OOK), Return-to-Zero (RZ), and Non-Return-to-Zero (NRZ). OOK splits an optical signal in two, and the phases of the two waves are adjusted to either constructively interfere (1) or destructively interfere (0). RZ means that the signal's value automatically resets to zero after each individual bit of data is transferred; unlike RZ, an NRZ signal's value does not reset between bits. To begin our analysis of the correlation between SNR, modulation scheme, and bit error rate (BER), the laser first generated a signal at 1550 nm, which was then sent to a modulator that encoded data onto the signal. The signal was then sent to a variable optical attenuator (VOA), which simulated power loss over distance due to beam divergence by reducing the signal's SNR. To maintain a stable noise floor, a second signal from an erbium-doped fiber amplifier (EDFA) was added to the laser. The attenuated signal was received by two devices: an optical spectrum analyzer (OSA) and a customized bit error rate tester (CBERT).
The OSA measured the SNR of our laser signal, and the CBERT compared the received signal with the expected signal to determine how much data was lost (the bit error rate, or BER). Because the computer could not communicate with the VOA or CBERT directly, a data acquisition device (DAQ) was used as an intermediary. Information from the OSA and CBERT was then sent back to the central computer, where a MATLAB program recorded the information, controlled the attenuation, and collected the SNR readings from the OSA. This experiment shows that SNR plays a key role in the efficiency of data transfer in an optical link. NRZ-OOK requires the highest SNR, and 67% RZ-OOK can preserve an optimal BER at an SNR where the other modulation schemes lose data. Assuming beam divergence is the only factor affecting FSO communications, we would expect a 67% RZ-OOK signal with an initial SNR of 50 dB to surpass the maximum allowable BER of 10^-9 after 4 km. Forward error correction (FEC) could extend this distance, allowing a maximum BER of 10^-3, but our CBERT cannot measure error rates that high.
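For intuition on why BER falls as SNR rises, a textbook approximation for OOK under Gaussian noise can be sketched as follows. This formula is one of several conventions in the literature and is illustrative only; it is not the CBERT measurement procedure used in the experiment:

```python
import math

# Textbook BER-vs-SNR sketch for OOK under Gaussian noise. One common
# approximation (conventions differ): BER ~= 0.5 * erfc(sqrt(SNR) / (2*sqrt(2))),
# with SNR as a linear power ratio rather than in dB.

def db_to_linear(snr_db):
    return 10 ** (snr_db / 10)

def ook_ber(snr_db):
    snr = db_to_linear(snr_db)
    return 0.5 * math.erfc(math.sqrt(snr) / (2 * math.sqrt(2)))

# BER falls steeply as SNR rises, matching the qualitative trend above.
```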

Investigating the role of polyrhythmic music in attention-based neurological therapies using EEG Sensors

Sumanth Mahalingam (Evergreen Valley High School, USA)

The neurological pathways involved in listening to complex rhythms offer multiple avenues for investigating the etiology and treatment of neurobiological disorders, largely due to the interconnection of the temporal, cognitive, and motor pathways involved in rhythmic processing. Multiple classifications exist for the different models of temporal rhythmic processing; for instance, the delineation between interval models (which process intervals of rhythm independently of each other) and entrainment models (which process intervals of rhythm relative to one another) is a prominent categorization useful in determining the brain's responses to complex changes in rhythm. Many of these models hypothesize the existence of neural oscillators that adapt to rhythmic patterns, comparing rhythmic intervals using these oscillators and their relative temporal changes. With regard to the interrelation between this type of processing and other cognitive processes in the brain, the most prominent avenue for understanding these models is the balance between top-down (predictive) and bottom-up (reactive) processing maintained in the brain's processing of rhythm. Similar balances are maintained in numerous cognitive processes, including attention and impulse control -- hence the disruption in top-down processing commonly noted in attentional disorders such as ADHD. As such, dopaminergic disorders involving attention and impulse control can be understood through similar models of rhythmic processing, as the disruption of top-down and bottom-up processing can lead to difficulties in maintaining the balance between fulfillment and violation of cognitive expectations, often constituting the basic neuropathology of attention deficits.
In this paper, the neural processing involved in polyrhythmic music was investigated as a possible therapy for attention control. Polyrhythms involve the concurrence of two different rhythms simultaneously, such as a three-beat pattern superimposed on a four-beat pattern. In theory, entrainment models involving oscillators would involve adaptation to multiple simultaneous rhythms; thus, the additional overlay of rhythms involved in polyrhythms would create complexities in the rhythm that aid in restoring the balance between dopaminergic fulfillment and violation of cognitive expectations. Using electroencephalography (EEG) to measure neural responses and activity in the frontal and parieto-temporal regions, participants in one experiment were played a continuous 4:3 polyrhythmic melody with variations in tonal patterns. As the music was played, participants were instructed to copy a passage from a book, as a means of measuring the extent of the music's effects on motor coordination and attention. In another experiment, the same participants were instructed to copy down a similar-length passage while listening to a non-polyrhythmic melody with minor-scale tonal patterns similar to those of the polyrhythmic melody. In the final experiment, the same participants copied down a similar-length passage without music, to contrast neural activity during a motor task with no musical stimulus. Power-spectral-density analysis of the EEG results showed comparative increases in pre-frontal beta waves and decreases in pre-frontal theta waves when listening to polyrhythmic music, indicating an increase in focus while polyrhythmic music was played. This demonstrates that polyrhythmic music may be a viable avenue for exploring the extent of neural entrainment, providing insights into attentional therapies.
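The 4:3 structure can be made concrete by listing onset times: within one measure, the two rhythmic layers align only on the downbeat. A minimal sketch (illustrative only; not the experiment's actual stimulus):

```python
from fractions import Fraction

# Onset times of a 4:3 polyrhythm within one measure of length 1.
# Illustrative only -- the experiment's melody and tempo are not reproduced.

def onsets(beats_per_measure):
    """Evenly spaced onset times for one rhythmic layer."""
    return [Fraction(i, beats_per_measure) for i in range(beats_per_measure)]

four = onsets(4)                          # 0, 1/4, 1/2, 3/4
three = onsets(3)                         # 0, 1/3, 2/3
shared = sorted(set(four) & set(three))   # layers align only on the downbeat
```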

Detecting a system of Binary Black Holes using the Einstein Toolkit

Agneya D Pooleery (USA)

Black holes are strange, mysterious objects in space. Stellar black holes weigh a few to tens of solar masses (the mass of the Sun), while supermassive black holes weigh millions to billions, and their gravitational pull is so immense that nothing -- not even light -- can escape them. They are mainly formed from dying stars. As stars live their lives, they fuse elements in their cores. Once a star's core fuses into iron, it marks the end of the star's life. Typically, the star will explode, creating a supernova. However, due to intense gravity, the core of the star may collapse on itself, forming a black hole.

Black holes can be identified by jets and by the swirling masses of matter around them. They have an event horizon, a plasma disk, and a singularity at the center. The singularity of a black hole is an infinitely small point at the center where all its mass is concentrated. If you were to go inside a black hole and touch the singularity, you would instantly become part of the black hole. A black hole's event horizon is its perilous edge: once something crosses the event horizon, it can never return, since escaping would require traveling faster than the speed of light, which is impossible. A black hole also has an accretion disk, a disk of plasma orbiting the black hole. The plasma may once have been part of a star. The black hole's gravity keeps the disk in place, and it can reach a stunning temperature of over 1,000,000 degrees Celsius! The plasma also slowly spirals into the black hole, shrinking the disk (and growing the black hole) by the second.
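For scale, the size of the event horizon of a non-rotating black hole follows the standard Schwarzschild formula r_s = 2GM/c^2 (a textbook result, not stated in the abstract); a quick computation:

```python
# Schwarzschild radius r_s = 2*G*M/c**2: the event-horizon radius of a
# non-rotating black hole. Standard textbook formula; the 10-solar-mass
# example below is illustrative, not taken from the abstract.

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius(mass_kg):
    return 2 * G * mass_kg / C**2

r_10 = schwarzschild_radius(10 * M_SUN)  # roughly 30 km for a stellar black hole
```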

An interesting phenomenon observed by astronomers in recent years is the merging of two black holes, often called a binary black hole (BBH) system. For many years, detection of BBH systems was hard because of the nature of black holes themselves and the limited detection facilities available. More recently, it has been found that when black holes spiral close to one another, they emit massive amounts of energy in the form of gravitational waves. The displacements these waves cause are about ten trillion times smaller than the width of a human hair and are incredibly hard to detect, but they have distinctive waveforms that can be calculated using general relativity. When a BBH system reaches very high velocities, the amplitude of the gravitational waves reaches its peak, allowing them to be detected by laser interferometers.

My project aims to study and use a software platform designed by the astrophysics community - the Einstein Toolkit - which can be used to simulate the merging of black holes and study the gravitational waves emitted from them.

An Artificial Intelligence Approach to Fetal Health Risk Prediction

Vighnesh U Nair and Devika Gopakumar (Dougherty Valley High School & IntelliScience Training Institute, USA); Krishnaveni Parvataneni (BASIS Independent Silicon Valley, USA)

The use of artificial intelligence (AI) in obstetrics has the potential to improve the prediction and monitoring of fetal health, which in turn could help to reduce maternal and infant mortality rates. This study uses IBM Watson, a powerful cognitive computing platform, to predict fetal health by analyzing data from cardiotocography (a recording of the fetal heart rate, based on ultrasound). The data contain information on fetal movement and accelerations as well as the mother's uterine contractions. Fetal movement and uterine contractions are two important indicators of fetal well-being. However, monitoring these factors in the traditional manner can be subjective and may not provide a complete picture of fetal health. By using AI to analyze data from these sources, we aim to identify patterns and make more accurate predictions about the health of the fetus. To evaluate the effectiveness of using IBM Watson to predict fetal health, we conducted a prospective observational study. The data were borrowed from the University of Porto and cover 2126 pregnant women whose fetal movement and uterine contractions were monitored throughout pregnancy using a combination of ultrasound and tocodynamometry. The IBM Watson platform was then used to analyze these data and make predictions about fetal health. The dataset contained about 2000 rows and 21 columns: 20 features, including %_of_time_with_abnormal_long_term_variability, abnormal_short_term_variability, accelerations, mean_value of short_term_variability, histogram variance, histogram median, tendency, number of zeros, and number of peaks. The outcome column took three values: 1 for a healthy fetus, 2 for a fetus with a possible disease, and 3 for a fetus with a definite disease. The IBM program read the data and split it in a 90:10 ratio for training and testing.
The choice of algorithms was left to the IBM platform, which proposed various models. In the current work, a random forest classifier and a decision tree classifier were used with and without enhancements. Without enhancements, the random forest classifier reached an accuracy of 93.3%, while the corresponding value for the decision tree classifier was 90.2%. The enhancements improved the accuracy of the decision tree classifier to 92.2%. Further analysis revealed the contributions of various features to the model's predictions. Prominent features selected by both algorithms included %_of_time_with_abnormal_long_term_variability, histogram_mean, abnormal_short_term_variability, accelerations, and histogram_median (100%, 99%, 75%, 68%, and 48%, respectively, for the random forest classifier). A similar selection, with slight variation, was observed for the decision tree classifier.
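The 90:10 train/test split mentioned above can be sketched with placeholder row indices; the actual split and modeling happened inside IBM Watson:

```python
import random

# The 90:10 train/test split described above, on placeholder row indices.
# The actual split and models were produced inside IBM Watson; this stdlib
# version only illustrates the ratio.

def split_90_10(rows, seed=0):
    rows = list(rows)
    random.Random(seed).shuffle(rows)  # shuffle before splitting
    cut = int(len(rows) * 0.9)         # 90% of rows go to training
    return rows[:cut], rows[cut:]

train, test = split_90_10(range(2126))  # dataset size from the study
```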

In conclusion, this study aims to demonstrate the effectiveness of using IBM Watson to predict fetal health from factors such as fetal movement and uterine contractions. By identifying patterns in these data, we hope to make more accurate predictions about fetal health and ultimately help reduce maternal and infant mortality rates. The final poster will include all information related to our research methodology and the IBM models developed in this work.

Geometry and Origami

Rishi Balaji (RJGrey Junior High School, USA)

Origami is the ancient art of paper folding. Aside from being a popular art form, origami is used in many fields of Science, Technology, Engineering, and Math (STEM). From an early age, I have been fascinated by origami and have folded many models of varying difficulty. Another area of interest of mine is mathematics; I am very interested in the various properties and uses of math in real life.
Here, math and origami meet through numerous geometric concepts. This poster will cover proofs and explanations of different geometric techniques used in various origami models. For example, folding a square into any number of equal divisions using diagonals involves similar triangles, while folding a strip of paper into equilateral triangles uses 30-60-90 triangles.
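The diagonal-folding division can be checked with coordinates. Placing the unit square with corners (0,0) and (1,1), the main diagonal lies on y = x, and the crease from corner (0,1) to the bottom-edge point (1/n, 0) lies on y = 1 - n*x; similar triangles (or solving the two equations) put the intersection at x = 1/(n+1). This is one common variant of the construction; the poster's exact folds may differ:

```python
from fractions import Fraction

# One common diagonal-crossing construction for dividing a unit square.
# The diagonal y = x meets the crease from corner (0, 1) to the bottom-edge
# point (1/n, 0), which lies on y = 1 - n*x; solving x = 1 - n*x (the
# similar-triangles argument) gives x = 1/(n + 1).

def division_point(n):
    return Fraction(1, n + 1)  # x-coordinate of the intersection

third = division_point(2)    # folding to the bottom-edge midpoint marks a third
quarter = division_point(3)  # folding to the 1/3 point marks a quarter
```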

Plantis: Floating Greenhouse

Simeon Wan To Suen, Ka Lun Tang, Hoi Ching Leung and Zi You Jasmine Siaw (Bishop Hall Jubilee School, Hong Kong); Man Kin Cheng (Bishop Hall Jubilee School & BHJS, Hong Kong)

Nowadays, the impact of global warming is becoming more and more significant. For example, sea levels have been rising rapidly, inundating many agricultural lands, and extreme weather such as hurricanes has appeared more and more frequently in the US and worldwide.

On the other hand, global demand for crops is increasing with population growth. Together with disruptions to supply chains and logistics, food prices have skyrocketed recently, and the situation is worsening with geopolitical tensions such as the Russo-Ukrainian War. Given the inadequate supply of agricultural land, we have to explore new farmland that requires minimal transportation. Hence we propose Plantis, a floating greenhouse, to adapt to the changes of this era.

Plantis aims to make use of inundated areas to plant crops. The system first absorbs seawater through a cotton wick made from old clothes. Fresh water is then obtained from the sea simply by distillation, using the sun as a natural heat source at ambient temperature, which requires no additional artificial energy. We have also applied technologies such as an ESP32 camera; water level, humidity, and temperature sensors; and an electric valve. With these IoT (Internet of Things) devices, farmers can monitor their crops remotely and control the amount of water inflow so that the crops will not be flooded. All of these devices are powered by the solar panel above to achieve zero carbon emissions.
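The valve logic described above amounts to simple threshold control with hysteresis; the level thresholds below are assumed for illustration, not taken from the poster:

```python
# Threshold valve control with hysteresis: open the inflow valve when the
# water level is low, close it before flooding. The thresholds and the
# percentage scale are assumptions, not measurements from the poster.

LOW_LEVEL = 20    # % of reservoir: open the valve at or below this (assumed)
HIGH_LEVEL = 80   # % of reservoir: close the valve at or above this (assumed)

def valve_command(level_percent, valve_open):
    """Return the new valve state given the current level reading."""
    if level_percent <= LOW_LEVEL:
        return True           # open: crops need water
    if level_percent >= HIGH_LEVEL:
        return False          # close: avoid flooding
    return valve_open         # in between: keep the current state
```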

To observe the effectiveness of the system, we monitored the growth of Dazzling Blue Kale and seedlings of Lacinato Dinosaur Kale inside the greenhouse for 8 days. Both species grew significantly, which suggests that the salinity of seawater did not affect the growth of the vegetation; instead, fresh water was successfully obtained and supported their growth. In other words, Plantis succeeded in providing a suitable environment for plant growth.

The Importance of Experiential Learning

Yingyi Wei (China)

With the advancement of the world and the improvement of the economy, an increasing number of schools have begun to advocate for experiential learning. The primary focus of this project is on how practice can supplement theoretical learning. While conducting physics experiments, I observed the students around me effectively picking up knowledge after having practiced, which intrigued me to research the topic. I want to emphasize the value of practice in the learning process. Finally, I am seeking to understand how practice and everyday activities affect how the brain processes information.

Simulation of Basketball Shooting Process and Investigation of the Optimal Shooting Speed and Angle Using Mathematical Models

Enze Danny Zhang (Beijing 80 High School, China); Rui Wang and Haoran Zhang (China)

Basketball is one of the most popular sports in the world, and many basketball amateurs practice to improve their shooting precision. Shooting precision is determined by three major factors: body factors (such as body position and the arm's stiffness and damping), shooting angle, and shooting speed. Existing research has analyzed the optimal shooting angle from a given position; however, there is still a lack of work that comprehensively considers the intertwined relationship of the three factors. Therefore, in this study, we developed a computer-vision-based mathematical model to simulate the shooting process and predict a successful shot. First, a dynamic model was developed to simulate the shooting process of an arm and analyze the influence of the body factors on the shooting speed and angle of the ball. Second, a kinematic model was developed to simulate the ball's trajectory given its initial speed and angle, and thus derive the optimal shooting speed and angle for the shooter's position. Finally, a computer vision model was developed to analyze videos of the shooting process and predict whether a shot is successful based on the ball's initial speed and angle and the releasing height and position of the shooter. The results indicate that the proposed model can effectively learn the influence of different body factors on a person's shooting speed and angle, and identify the optimal solutions according to each player's height and distance from the basket. This model can therefore be used to improve the shooting percentage of basketball players, and the authors aim to extend the study to model the whole-body shooting posture in the future.
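The kinematic layer reduces to projectile motion: ignoring drag and spin, the ball's height when it reaches the hoop's horizontal distance d is h + d*tan(theta) - g*d^2 / (2*v^2*cos^2(theta)). A sketch with standard court numbers (free-throw distance of about 4.19 m, rim height 3.05 m) and an assumed release height and speed:

```python
import math

# Kinematic sketch: the ball's height when it reaches the hoop's horizontal
# distance, ignoring drag and spin. Free-throw distance (~4.19 m) and rim
# height (3.05 m) are standard; release height and speed are assumptions.

G = 9.81  # m/s^2

def height_at_hoop(v, angle_deg, release_h, dist):
    theta = math.radians(angle_deg)
    t = dist / (v * math.cos(theta))  # time to reach the hoop's plane
    return release_h + v * math.sin(theta) * t - 0.5 * G * t**2

# A shot "fits" if it arrives near rim height (3.05 m):
y = height_at_hoop(v=7.3, angle_deg=52, release_h=2.0, dist=4.19)
```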

Local Teachers' Satisfaction with and Perceptions of Voluntary Teaching Programs and Their Instructional Practices in Rural China

Siyu Liu (Shenzhen College of International Education, China)

Voluntary teaching programs are becoming increasingly popular in rural China as a way to reduce the gap in educational resources between urban and underdeveloped areas. Most existing papers have studied the impacts of voluntary teaching programs on local students, while local teachers' satisfaction with and perceptions of these programs are often overlooked. On the one hand, volunteers bring fresh energy, innovative teaching materials, and modern teaching techniques to local education, and they help reduce local teachers' teaching burden and reach teaching targets. On the other hand, some studies have found that some volunteers have poor classroom management skills and could mislead local students' sense of right and wrong, which might leave a negative impression on local teachers. This paper aims to investigate local teachers' attitudes toward and perceptions of voluntary teaching programs, as well as the programs' effects on their instructional practices. The research questions are:
Do volunteer teaching programs affect local teachers' satisfaction with these programs?
How do local teachers perceive volunteer teachers' teaching quality (i.e., classroom management, teaching content)?
Do volunteer teaching programs affect local teachers' relationship with students (i.e., perception of students' closeness with volunteer teachers and local teachers)?
Do volunteer teaching programs affect local teachers' instructional practices (i.e., teaching schedule; class contents; effectiveness of communication)?

Online questionnaires were delivered to local teachers who have participated in voluntary teaching programs. Participants were reached by convenience sampling and were asked to respond to multiple-choice questions, rating-scale questions, and open-ended questions. The data include demographic information about the local teachers, their satisfaction with volunteers and programs, their perception of the relationship between local students and volunteers, and the degree of disruption to their teaching schedules. I analyzed the data descriptively using means and standard deviations in Stata.

I found that volunteer teaching programs had both positive and negative impacts on local teachers. Local teachers were generally satisfied with the volunteers, their classroom management ability, and the voluntary teaching programs. Moreover, local teachers felt that their communication with volunteers was efficient. However, it was also reported that communication between local teachers and volunteers should be more frequent, and that local teachers' teaching schedules were disrupted, which inconvenienced their teaching progress.

Taken together, this study has the following policy implications. First, volunteers should contact local teachers before voluntary teaching programs begin, for example through online meetings, to better understand the real circumstances of local education and to prepare class contents. Second, frequent communication between local teachers and volunteers is needed before and during the programs. Lastly, further research with a larger sample size is needed to increase generalizability.

Performance Improvement of Table Tennis Server and Intelligent Training System

Lijia Shen (High School, China)

Recently, an increasing number of automatic table tennis serving machines have entered the market, offering alternative options for table tennis players to train and improve their skills. However, it is still unclear whether these machines are effective at improving players' skills. To better understand and quantify the effectiveness of existing table tennis serving machines, this research project conducted a market survey and four customer interviews to determine customers' preferences for existing products, as well as gaps in product features. An improved design was then proposed to address the drawbacks of current products. The market survey indicated that there are three main types of serving machines: portable machines, fixed machines, and table tennis serving robots. However, many customer pain points remain unresolved; the top three limitations are 1) lack of variation in the spin on the ball and the path of the serve, 2) difficulty of operation, and 3) no feedback for users to improve. These results indicate that customers need a more "human-like" server that is easy to use and can also provide feedback for improvement. A new automated table tennis training system was thus prototyped. The system consists of an off-the-shelf automatic serving machine modified so that all of its motion axes are independently controlled by an Arduino-based embedded system and programmed with improved serving routines to achieve more human-like serves. Future improvements will include acoustic or visual sensors to detect the success rate of users' returns, and algorithms to adjust the serving routines automatically based on sensor measurements.
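A "human-like" serving routine can be sketched as randomized draws of spin, speed, and placement rather than one fixed, repeated serve. The parameter names and ranges below are illustrative, not the prototype's actual Arduino axis commands:

```python
import random

# Randomized serving routine: vary spin, speed, and placement per serve.
# Parameter names and ranges are illustrative assumptions, not the
# prototype's Arduino commands.

SPINS = ["topspin", "backspin", "sidespin", "no-spin"]

def next_serve(rng):
    return {
        "spin": rng.choice(SPINS),
        "speed_mps": rng.uniform(5.0, 15.0),  # assumed ball-speed range
        "placement": rng.choice(["left", "middle", "right"]),
    }

routine = [next_serve(random.Random(i)) for i in range(10)]  # a 10-serve drill
```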

Robotic prosthetics

Man Hin Cheung, Hoi Lam Wong, Ka Yip Li and Anson Ngan (Hong Kong); Man Kin Cheng (Bishop Hall Jubilee School & BHJS, Hong Kong)

We are working on a project aiming to create cheap, accessible prosthetics that use motors and strings to act as the muscles of body parts.

For the motors, we decided to use servo motors, as they are light and their rotation angle can easily be controlled and limited in code. We are using an Arduino board for our prototype, but we may change this because of its size.

To control movement, we tested buttons, microswitches, slide switches, and toggle switches. Slide switches are not favorable for us, as they are hard to activate and inconvenient for controlling the servo motors. Moreover, we would like the servo motors to move through different angles, so switches that can only send on-off signals are not suitable. Rotary switches and rotary resistors seemed like our only choice; however, they are bulky, and we would like the device to be portable.
We are using different body parts to control the movement of the servo motors. Inspired by a few sci-fi books we read, we decided to control the servos with less-used body parts such as the jaw, toes, and eye muscles, placing pressure sensors on those body parts to generate control signals for the motors. However, our school does not have pressure sensors, so we made our own to test the idea. We first tried using a pressure-sensitive conductive sheet (Velostat) to create a pressure sensor, which worked very well. However, it was expensive and unstable, as we ordered it from Taobao (the change in resistance differed greatly between sensors). After learning how Velostat works (per Wikipedia, Velostat, also known as Linqstat, is a packaging material made of a polymeric foil (polyolefins) impregnated with carbon black to make it electrically conductive), we decided to try coating paper with carbon using pencil lead (12B, which conducts electricity better than harder pencil leads), and this also worked well, giving us a cheaper homemade pressure sensor. The sensor consists of two copper or foil strips and a sheet of pencil-coated paper; as the sensor is bent or pressed, its resistance decreases, so we can use the change in resistance to control the movements. The board reads the change and commands the servos to contract. We would also like to add a mini-game in which the hand plays rock-paper-scissors, pushing buttons to demonstrate its movements.
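The resistance-to-motion chain can be sketched numerically: a voltage divider converts the sensor's resistance change into a voltage the board can read, which is then mapped to a servo angle. All component values and ranges below are assumptions, not measurements from the project:

```python
# Reading the homemade pressure sensor through a voltage divider and mapping
# the result to a servo angle. The supply voltage, fixed-resistor value,
# sensor resistance range, and 0-180 degree mapping are all assumptions.

VCC = 5.0          # supply voltage, typical for an Arduino
R_FIXED = 10_000   # ohms, divider resistor (assumed)

def divider_voltage(r_sensor):
    """Voltage across the fixed resistor as the sensor's resistance changes."""
    return VCC * R_FIXED / (r_sensor + R_FIXED)

def servo_angle(r_sensor, r_min=2_000, r_max=50_000):
    """Map sensor resistance (low = pressed hard) to a 0-180 degree angle."""
    r = min(max(r_sensor, r_min), r_max)
    return 180 * (r_max - r) / (r_max - r_min)

angle_pressed = servo_angle(2_000)   # hard press -> full contraction
angle_rest = servo_angle(50_000)     # at rest -> no contraction
```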

In the future, we would also like to make a whole arm, and to improve the design so we can help more people, as that is the aim of this project. We would also like to try using EMG (electromyography) to control it, but we have to conduct more research: we are unsure whether people born without arms produce the motor-neuron signals needed for EMG control.

Improving chess player skills by studying tactics: a comparison between chess Grandmasters and chess engines

Jinshang Li (PRISMS High School, USA)

This study aims to investigate the effectiveness of studying tactics in improving chess players' skills by comparing the tactics used by chess Grandmasters and AI-based chess engines. The study will involve a sample of chess players at various skill levels, who will be divided into groups and assigned different tactics and training methods. One group will study tactics used by Grandmasters, another group will study tactics used by chess engines, and a control group will receive no tactics training. The study will measure the participants' chess skill levels before and after the training period and compare the results between the groups. Afterwards, chess masters will offer interpretations of the chess engines' moves to help players improve their tactics. The findings of this study will provide valuable insights into the most effective tactics and training methods for chess players, and may also have implications for other domains where strategy and decision-making are important.

Effective Methods of Detection and Prevention of Falling Over by Using AI

Qinuo He (PRISMS High School, USA)

Artificial intelligence and several sensors integrated into a belt and clothing detect changes in the wearer's environment and movement to determine whether a fall is occurring. If a fall is detected, the clothing and belt take protective measures to avoid injury, and the device analyzes the wearer's physical signs to determine whether assistance and rescue are needed. The main objective of this research is to achieve the highest possible accuracy, with as few misjudgments as possible, through the simultaneous operation and monitoring of several sensors, while improving and integrating the sensors to achieve the lowest cost and enable mass production. The main role of artificial intelligence in this research is data analysis and processing.
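One classic baseline for accelerometer-based fall detection is a threshold rule: a near-free-fall dip in total acceleration followed shortly by an impact spike. The thresholds below are illustrative assumptions, and the project's AI analysis goes beyond this simple rule:

```python
import math

# A classic threshold heuristic for accelerometer-based fall detection:
# a near-free-fall dip in total acceleration followed by an impact spike.
# The thresholds (in g) are illustrative assumptions.

FREE_FALL_G = 0.4  # magnitude below this suggests free fall (assumed)
IMPACT_G = 2.5     # magnitude above this suggests impact (assumed)

def magnitude(ax, ay, az):
    return math.sqrt(ax * ax + ay * ay + az * az)

def detect_fall(samples):
    """True if a free-fall dip is later followed by an impact spike."""
    mags = [magnitude(*s) for s in samples]
    for i, m in enumerate(mags):
        if m < FREE_FALL_G and any(x > IMPACT_G for x in mags[i + 1:]):
            return True
    return False

fall = [(0, 0, 1.0), (0, 0, 0.1), (0.5, 0.5, 3.0)]  # dip, then spike
walk = [(0, 0, 1.0), (0.1, 0, 1.1), (0, 0.1, 0.9)]  # steady ~1 g
```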

Designing a Sensor Embedded Tracksuit using Arduino MCUs and Accelerometers to Model Kinesiology of Athletes

Shaunak M Marathe (JHU APL, USA)

0
Problem:
Today, every athlete wants the opportunity to become a better, stronger version of themselves, with hopes of improving their skills and fitness levels. Many face the challenge of not knowing how, or what specifically, to improve, and because of this they fail to reach their full potential, becoming figuratively frozen in the process. Not knowing how to improve, or in which areas they are stronger or weaker, can be a detrimental factor for any athlete. Whether in sports that demand stamina, skill, or patience, perfecting sport-specific form and understanding in-depth statistics can markedly improve any athlete's performance and game confidence, making this one of the hardest problems athletes face today.

Solution: With cutting-edge, modern wearable technology, every athlete can benefit from information geared toward improving their technique and physical movements in their designated sport. To tackle this problem, I created a tracksuit prototype that uses sensors attached to a person's limbs (arms and legs) to model the physical motion and proper technique of a designated activity, such as a workout routine, a high-endurance game, or a simple walk.
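
As one illustration of how limb-mounted accelerometer data might be processed on the host side (the data format and the tilt computation here are assumptions, not the prototype's actual firmware or protocol):

```python
import math

# Sketch of host-side processing for limb-mounted accelerometers.
# For a slowly moving limb, gravity dominates, so the tilt of the sensor's
# x-axis relative to the horizontal plane can be estimated from a 3-axis
# reading expressed in units of g.

def tilt_deg(ax, ay, az):
    """Tilt of the sensor x-axis above the horizontal plane, in degrees."""
    return math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))

def range_of_motion(readings):
    """Peak-to-peak tilt over a recorded movement, e.g. one arm swing."""
    angles = [tilt_deg(*r) for r in readings]
    return max(angles) - min(angles)

# Example: a sensor rotating from flat (0, 0, 1) to x-axis-up (1, 0, 0),
# i.e. a 90-degree limb swing.
swing = [(0.0, 0.0, 1.0), (0.5, 0.0, 0.866), (0.866, 0.0, 0.5), (1.0, 0.0, 0.0)]
```

Per-joint statistics like this range of motion could then be compared against a reference movement to give the feedback described above.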

Genomic Curation for Improved Marine Mammal eDNA Classification

Christopher Li (The Johns Hopkins University Applied Physics Laboratory, USA); Olive J Lara (Johns Hopkins University Applied Physics Laboratory, USA); William Joseph Ross III (Johns Hopkins Applied Physics Laboratory)

0
Environmental DNA (eDNA) is a promising tool for monitoring species abundance in the environment. However, there is still work to be done to improve the accuracy and reliability of this technology, which can be affected not only by the starting amount of DNA and wet lab processing, but also by the bioinformatics classification algorithms and reference databases chosen for taxonomic analysis. Additionally, certain taxa can prove more difficult to genomically classify than others due to a number of hurdles such as lack of assembled sequence references, repetitive regions, and high intraspecies diversity. For this reason, our team has focused on improving classification of species in the Delphinidae family (oceanic dolphins). Due to recent, radiative species divergence, as compared to other marine mammals, members of the dolphin family display high rates of intraspecies diversity, making classification based on single reference genomes per species difficult. To address this classification ambiguity, our team constructed a new marine mammal mitogenomic reference database which contains all publicly available mitochondrial genomic data for each species. Our team collaborated with The National Aquarium to procure positive control eDNA samples for sequencing, queried NCBI's GenBank for mitochondrial reference sequences, and employed the Kraken2 algorithm to build our database and classify our samples. We added 19,564 sequences to our custom Kraken2 database and observed improved species-level classifications of Atlantic bottlenose dolphins (Tursiops truncatus) through use of the expanded database. Future work aims to continually assess methods to increase species-resolution classifications of marine mammals in eDNA samples.

Precision Medicine in Lung Cancer

Yuchen Ye (China)

6
Lung cancer is the second most common cancer and has the lowest survival rate, 18%, compared with other common cancers. There are 2.3 million new cases each year, and the majority (85%) of them are non-small cell lung cancer (NSCLC). There are different molecular subtypes of NSCLC with different biomarkers. Although immune checkpoint inhibitors (ICIs), including monoclonal antibodies against programmed death-1 (PD-1) and programmed death ligand-1 (PD-L1), have significantly improved the survival and quality of life of a subset of NSCLC patients, identifying the patients who will benefit most from ICIs remains an unmet need, and targeted therapies against various biomarkers are urgently needed. Thus, precision medicine, which aims to provide customized treatment strategies based on genetic testing, individual habits (such as smoking), etc., could potentially select a therapy regimen with high efficacy and reduced side effects for each patient. To assess the feasibility of using precision medicine in lung cancer treatment, this research is based on a comprehensive literature review and a data analysis of results from different studies and articles. Firstly, by studying PD-L1 expression as a predictive biomarker, ICI efficacy may be evaluated. Immune checkpoints engage when proteins on the surface of immune cells called T cells recognize and bind to partner proteins on other cells, such as some tumor cells. These proteins are called immune checkpoint proteins. When the checkpoint and partner proteins bind together, they send an "off" signal to the T cells. This can prevent the immune system from destroying the cancer. Secondly, the discovery of novel biomarkers may further categorize patients and support the development of new targeted therapies. Lastly, patients who showed a poor response to ICI alone may respond better to the combination of chemotherapy and ICI.
With thorough information on immune profile, genome alteration as well as individual risk factors, precision medicine will dramatically improve and save a great many lives worldwide.

Flow of beads in a viscous film on vertical fibers

Leonardo Dobrinsky (USA)

0
People are usually fascinated by things that are unusual or different. In many cases, physics may be studied because it satisfies people's curiosity or because it is fun. For example, people studied static electricity to perform magic. Einstein studied physics because he wanted to better understand how the world works, not because he wanted to build a particular machine. I think learning about physics is exciting, fun, and somewhat educational. The goal of this presentation is to see how beads of a viscous fluid film flow down strings. In this presentation, I am going to show my experimental setup that includes a bucket, strings, and oil under a bright LED display.
This setup highlights the interesting behavior of beads in a viscous film sliding down thin strings. The experiment shows that even a relatively simple system can exhibit complex behavior. For example, beads can slide in unexpected ways. By observing the experiment, I hope that you will find the joy and beauty in physics.

Academic Stress, Parental Expectations, and Sleep: A Daily Diary Study Among Adolescents

Melinda Yu (USA)

28
Introduction: Adequate sleep is essential for adolescents, as it has been linked to various developmental outcomes (e.g., mental health, cognitive function, physical health). The current study aims to investigate factors that may contribute to adolescents' sleep. Specifically, academic stress has been negatively associated with sleep quality and length, yet less is known about how school and family intertwine to influence sleep. Parental expectations regarding adolescents' academic performance may also have an impact on sleep. More importantly, parental expectations are also an indicator of family relationships and family support, which are critical resources that may buffer the effect of academic stress on sleep. Furthermore, sleep fluctuations are often caused by poor sleep quality, and higher sleep fluctuations across a given period may indicate poorer adjustment ability and more disturbances in sleep. In sum, this study examines the associations between academic stress, sleep, and sleep fluctuation, as well as the moderating effect of parental expectations on these associations.

Methods: Thirty high school students (aged between 15 and 17, 77% female) were asked to participate in a 3-day daily diary survey in which they reported their daily academic stress and sleep. Academic stress was measured by daily study time (in hours), number of school problems, whether participants had testing (yes/no), and overwhelmingness of workloads (1 (not at all) to 5 (extremely)). The current study calculated the average score of academic stress and sleep across the three days to indicate participants' experiences during the study period. The standard deviation of sleep quality was calculated to represent the fluctuations in sleep.
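
The moderation analysis described above can be sketched as a multiple regression with an interaction term. The data below are synthetic stand-ins (the diary data are not public), and the variable names are assumptions; only the shape of the analysis mirrors the study.

```python
import numpy as np

# Synthetic stand-in for the diary data: 30 participants, standardized
# academic-stress composite and parental-expectation score.
rng = np.random.default_rng(0)
n = 30
stress = rng.normal(0, 1, n)       # standardized academic-stress composite
parent_exp = rng.normal(0, 1, n)   # standardized parental-expectation score
# Simulate sleep quality with negative main effects but NO interaction,
# mirroring the reported result (both predictors matter, no moderation).
sleep = 2.63 - 0.4 * stress - 0.3 * parent_exp + rng.normal(0, 0.2, n)

# Design matrix: intercept, main effects, and the stress x expectation
# product, which is the moderation term tested in multiple regression.
X = np.column_stack([np.ones(n), stress, parent_exp, stress * parent_exp])
coef, *_ = np.linalg.lstsq(X, sleep, rcond=None)
intercept, b_stress, b_parent, b_interaction = coef
```

With data generated this way, the fitted main effects come out negative while the interaction coefficient stays near zero, which is the pattern the study reports.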

Results: On average, participants self-rated their sleep quality as 2.63 (possible range = 0-5), representing relatively poor sleep quality. Their sleep length was on average 6.95 hours (range = 4.95-11.53 hours). Multiple regression showed that longer study time and a more overwhelming workload on a given day were associated with poorer sleep quality. Facing more school problems was associated with shorter sleep. The study also found that higher parental expectations were associated with poorer sleep quality. Yet parental expectations did not moderate the effects of academic stress on sleep, meaning that academic stress has an adverse effect on sleep regardless of parental expectations. None of the academic stress indicators were associated with sleep fluctuations.

Discussion: The findings suggest that adolescents may face problems related to sleep quality and sleep length, especially when they perceive high academic stress and higher parental expectations for their developmental outcomes. Academic stress and parental expectations are intertwined in their influence on sleep, in that adolescents with higher parental expectations also experienced more overwhelming schoolwork. In short, the current study highlights the importance of both school and family environments for adolescents' daily activities and health, and provides implications for more integrative perspectives on adolescents' development.

Conclusion: Higher academic stress had an adverse effect on adolescents' sleep quality and length. The strength of these associations does not vary with parental expectations, yet higher parental expectations themselves also had a negative effect on sleep quality.

Advancing Knee Arthroscopic Surgeries with Endoscopic and B Mode Ultrasound Imaging

Catherine Ren (Havergal College, Canada); Yining Zhang (University of Toronto Schools, Canada)

5
Knee arthroscopy is a minimally invasive surgery in which surgeons use arthroscopes to visualize the internal structure of the knee, avoiding the need for large incisions. Owing to its effectiveness in reducing pain, knee arthroscopy is one of the most widely used surgical procedures for the treatment of various knee injuries and is currently performed around 750,000 times each year in the United States. However, an analysis of 9 studies found that the pain reduction offered by knee arthroscopy is temporary, with most participants reporting that the benefits of the surgery did not last more than 24 months. This is partially attributed to a lack of visibility: current arthroscopes employ optic fibres to transport two-dimensional images of the knee to a monitor screen, which provides only the superficial morphology of tissues and hence leads to inaccurate procedures.

To solve this issue, combining ultrasound imaging with the traditional optical modality has been investigated. An external ultrasound can allow the surgeon to track the whereabouts of the surgical instruments inside the knee, while an ultrasound arthroscope based on a modified intravascular ultrasound can allow the surgeon to receive depth-resolved information, such as evaluating the properties and integrity of cartilage and tissue around the joint in three-dimensional (3D) structure. While current studies in this field focus on either an external ultrasound or an adapted intravascular ultrasound used inside the knee cavity, the specific combination of both an external and an arthroscopic ultrasound offers great promise for improving knee arthroscopy procedures with comprehensive information. However, designing and fabricating an internal ultrasound imaging device for this specific purpose is resource-intensive and costly.

In this work, we developed a simulation program to mimic ultrasound arthroscopy implementations in order to optimize the design of ultrasound arthroscopy devices for knee surgery. The simulation was built in MATLAB on the basis of an ultrasound propagation simulation toolbox (k-Wave). A series of new functions were developed to model ultrasound transducers imaging from inside the knee. A 3D model of the human knee was developed in SketchUp. 2D cross-sectional images were captured from this model and then uploaded into the simulation program as imaging targets for ultrasound imaging tests. Ultrasound images with high consistency with the targets were obtained in the simulation. The simulation program can be easily modified to fit specific needs and correct small problems, which allows researchers to test the effectiveness of proposed designs without spending large amounts of money on a physical working model. The optimized combination of endoscopic (internal) and B-mode (external) ultrasound imaging is a new design that can create positive outcomes for many different stakeholders. Patients will benefit from an improved surgical procedure that ensures fewer errors and a better outcome. It is anticipated that this study will serve as a stepping stone for future research and eventually for trials that employ the dual ultrasound device method in clinical settings.

Toward an Energy Saving Smart Campus - IoT Smart Light Switch

Tsz-Him Ma and Yuen-Ning Poon (Cognitio College Kowloon, Hong Kong)

1
Internet of Things (IoT) devices have been widely adopted for energy-saving applications, including lighting. Inspired by a real-life problem on our campus, we created the IoT Smart Light Switch, an intelligent campus IoT device that aims to reduce the campus's energy consumption. In this poster, we present the background and motivation behind creating the IoT Smart Light Switch. It was inspired by the lighting system installed on our campus, a traditional lighting system with a time switch: the corridor lights remain on even when the surroundings are bright and nobody passes by, resulting in wasted electricity.

We also introduce the details of the design and construction of the IoT Smart Light Switch. It was designed for an environment with abundant natural light, such as a corridor or semi-open areas. It is a Wi-Fi-based smart switch that senses the presence of a passerby, the ambient light level of the surrounding environment, and the current time. It was built with a passive infrared (PIR) sensor, an ambient light sensor, and a Wi-Fi-enabled ESP32 microcontroller, shielded by an outer case made of acrylic plates. The IoT Smart Light Switch connects with the Smart Relay and the cloud server via Wi-Fi, and it transmits the switching command to the Smart Relay to turn the light on or off. The light control is based on three parameters: ambient light level, time of day, and human presence detection. For example, lights are turned off when the ambient light is low outside working hours and no human is detected. The IoT Smart Light Switch also periodically uploads data to the cloud server, such as light level, passerby presence, and light power status. An end user can access the cloud server to retrieve data, view the dashboard, and control the IoT Smart Light Switch through their mobile devices.
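
The three-parameter switching rule can be sketched as plain decision logic. The thresholds and working hours below are assumptions; the abstract does not give the prototype's actual firmware values.

```python
# Sketch of the three-parameter switching rule (light level, time of day,
# presence). The constants are assumptions, not the prototype's real values.
AMBIENT_DARK_LUX = 150           # below this, the corridor counts as dark
WORK_START, WORK_END = 8, 18     # assumed working hours, 24-hour clock

def light_should_be_on(ambient_lux, hour, human_present):
    """Decide the relay command from light level, time of day, and presence."""
    in_working_hours = WORK_START <= hour < WORK_END
    if not human_present and not in_working_hours:
        return False              # off-hours and nobody around: save energy
    if ambient_lux >= AMBIENT_DARK_LUX:
        return False              # enough natural light, keep the light off
    return human_present          # dark and occupied: turn the light on
```

On the device, the ESP32 would evaluate a rule like this and send the resulting command to the Smart Relay over Wi-Fi.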

Finally, the poster includes the results and future work of the project. Since the project is still a work in progress, we present an estimation of the energy and electricity bills that could be saved by adopting this prototype. Future plans include (i) conducting a trial in the school corridor or podium to compare the trial result with the estimation of electricity use, (ii) improving the IoT Smart Lighting Switch based on the trial result, and (iii) exploring other potential applications.

Can Deep Learning Models Trained on Small and Imbalanced Ultrasound Image Samples Detect Polycystic Ovary Syndrome (PCOS)?

Sophia Y Liu (Cherry Hill High School East, USA)

1
Polycystic ovary syndrome (PCOS) is the most common endocrine abnormality in women and a leading cause for infertility, affecting approximately 15% of reproductive-aged women globally. PCOS can also lead to illnesses such as heart disease, endometrial cancer, stroke, and diabetes. Women with PCOS often develop small sacs of fluid, called cysts, in their ovaries, which can be identified through ultrasonography.

Several studies have applied deep learning methods to diagnose PCOS through analyzing ultrasound images. Recently, transfer learning has become increasingly popular for enhancing the performance of deep learning models. Transfer learning uses deep learning models pretrained on data from other domains to solve a problem in a new domain. This approach can be more effective because the pretrained model may store knowledge and information that could help solve the new problem. Last year, researchers began investigating the use of transfer learning in the diagnosis of PCOS based on large and moderate-sized training sets (Suha and Islam, 2022; Hosain et al., 2022).

My research aims to help develop a more thorough understanding of the effectiveness and efficiency of using deep learning to identify PCOS from ultrasound images. Specifically, the purpose was to shed light on the following questions: How do small and imbalanced training samples affect transfer learning? Will a transfer learning approach outperform a non-transfer learning approach? Will deeper neural networks result in higher accuracies?

I used four classical deep learning architectures: ResNet50, ResNet101, VGG16BN, and VGG19BN. The ResNet architectures have over 20 million parameters and the VGG architectures have over 130 million parameters. I implemented a transfer learning model and a non-transfer learning model for each architecture. The transfer learning models were pretrained on ImageNet, which has over 14 million natural images but no medical images. The PCOS dataset has about 2000 images. All models were prepared using training and validation sets, and tested on the same test set.

The experiments based on the ResNet and VGG models generated consistent results. First, small and imbalanced training sets have minimal impact on the performance of transfer learning models. Training sets of different sizes were used; the smallest, most imbalanced training samples had only 6 PCOS-positive images and 60 negative images. All transfer learning models trained on these sets achieved average accuracies >99% on the test set of over 1500 images. Second, transfer learning models are more accurate and reliable, and have shorter and smoother training processes. Third, models with more layers do not perform better on this problem.

This is the first study to demonstrate transfer learning's capability to detect PCOS using small and imbalanced samples of ultrasound images.

References

Suha, S.A. and Islam, M.N., 2022. An extended machine learning technique for polycystic ovary syndrome detection using ovary ultrasound image. Scientific Reports, 12(1), p.17123.
Hosain, A.S., Mehedi, M.H.K. and Kabir, I.E., 2022, October. PCONet: A convolutional neural network architecture to detect polycystic ovary syndrome (PCOS) from ovarian ultrasound images. In 2022 International Conference on Engineering and Emerging Technologies (pp. 1-6).

General Optical Properties of Two-Dimensional Materials & Applications in Optoelectronics

ZiRui Yu (High School, China)

7
Two-dimensional (2D) materials are now prevalent in many avant-garde areas, such as maglev trains and optoelectronics. Since the beginning of the 21st century, physicists have been delving deeper into the various physical properties of 2D materials. Graphene, a single-layer honeycomb lattice of carbon, is one of the most typical 2D materials. Despite its various fascinating properties, graphene is a gapless semimetal. This stimulated the search for 2D materials with semiconducting characteristics and more prominent optical properties. Beyond the conventional means of tuning material properties, 2D materials unlock a new knob for tuning: twist angles. Twist angles enable the tuning of band structures and exciton properties in 2D semiconductors, and thus their optical properties. For example, twist angles create a new displacement between holes and electrons, forming distinct absorption and emission spectra and specifying the light-frequency absorption range. This research is based on a literature review of the band structure of typical monolayer van der Waals 2D materials, such as MoS2 and WSe2, and simulated the optoelectronic properties of these 2D materials by analyzing absorption intensity spectra based on open data collected from the Materials Project. The results of the research provide a better understanding of the band structure and photo-detection applications of MoS2 and shed light on the future use of these materials in optoelectronics.

Deciphering the Indus Script: Decoding Missing and Unclear Indus signs and Identifying Anomalous Indus texts from West Asia using Markov Chain Language Models

Varun Venkatesh (USA)

1
The Indus script developed between 2500 and 1800 BCE in the Indus Valley civilization in the Indian subcontinent and then died out. It has not been deciphered yet, and the language it encodes is unknown. Without decipherment, the details of this civilization have largely remained mysterious. The Indus script texts found so far in archaeological digs include many damaged artifacts with unclear and missing signs. Identifying the missing and unclear signs and extending the Indus text corpus would greatly benefit decipherment efforts. While others have done work using the older corpus and simpler language models, this work builds advanced n-gram Markov chain language models on the latest ICIT text corpus and uses them to predict the missing and unclear signs and assist catalogers. We also use the language models we built to recognize some anomalous Indus texts based on their geographical distribution.
First, we analyzed patterns and concordances of the signs, pairs, triplets, and other n-grams and discovered how the signs behave with respect to their positions in the Indus texts. We then did statistical analyses focused on text length and sign positional distribution and built a positional probability model. With this understanding of sign behavior, we built Markov chain language models based on n-grams, augmented with the positional probabilities of the signs. We also devised and implemented an effective sign fill-in algorithm on top of these Markov chain language models using model scores of snippets of n-grams. We find that a group of three signs in a cluster captures a lot of information when the signs appear in the middle of a text. Signs appearing in the leftmost positions are the most difficult to predict. Using the language models and the sign fill-in algorithm, we identified missing single signs in the test dataset and tuned our parameters to improve the accuracy to about 63%. We then filled in the actual unclear texts with our predicted signs and published our predictions in order of probability. This adds a wealth of previously missing information to the Indus script corpus.
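
The fill-in idea can be sketched with a bigram Markov chain. The real models use higher-order n-grams augmented with positional probabilities on the ICIT corpus; the toy corpus and unsmoothed probabilities below are invented purely for illustration.

```python
from collections import Counter

# Toy sign corpus (invented; real texts are sequences of Indus sign IDs).
corpus = [
    ["A", "B", "C"],
    ["A", "B", "D"],
    ["E", "B", "C"],
    ["A", "B", "C"],
]

# Count bigrams and how often each sign occurs as the left element of one.
bigrams = Counter()
prev_counts = Counter()
for text in corpus:
    for a, b in zip(text, text[1:]):
        bigrams[(a, b)] += 1
        prev_counts[a] += 1

def score(seq):
    """Product of conditional probabilities P(next | prev); 0 for unseen pairs."""
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= bigrams[(a, b)] / prev_counts[a] if prev_counts[a] else 0.0
    return p

def fill_in(left, right, vocabulary):
    """Rank candidate signs for a single damaged slot: left + [?] + right."""
    return sorted(vocabulary, key=lambda s: score(left + [s] + right), reverse=True)
```

Here `fill_in(["A"], ["C"], ...)` ranks "B" first, since "A B" and "B C" dominate the toy corpus; the actual algorithm scores longer n-gram snippets and weights them by sign position.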
Our results also show that the language model perplexity was high for several Indus texts that were found in the West Asian region in the contemporary bronze age civilizations of Sumer, Dilmun, and Elam. Some of these texts did not fit in well with the language model built with Indus texts from just the Indian subcontinent. From this, we conclude that the language in several West Asian Indus texts is quite different from the language used in the Indus script from the Indian subcontinent.
We believe that the sophisticated language models and algorithms that we developed give a better understanding of how the Indus Script behaves, add more complete texts to the Indus text corpus by filling in the missing signs, and postulate that the Indus script encodes multiple languages that varied by geography. We think these are significant advancements toward deciphering the Indus script.

Enhancing STEM Education to Communities with Low Access to STEM Resources

Christine DiMenna (Gilman School & QuarkNet, USA); Arya Kazemnia, Aman Garg, Leo Leo Wang, Abraham Karikkineth and Daniel Koldobskiy (Gilman School, USA)

1
In the Baltimore community, the supply of STEM education is being rapidly outpaced by demand. This year, our team sought to develop methods of tech instruction through collaboration across communities and groups, as well as innovative systems that allowed for the same degree of learning with fewer resources. As a high school robotics team, we had access to many STEM resources, and we felt it was imperative to reach out and do our part this season, especially since, as little as five months ago, we had no outreach program whatsoever. Because of this, our success this year is an example of how outreach programs can be built with minimal resources and limited time while still significantly aiding STEM education. Along the way, we met a series of roadblocks, including logistical constraints such as time, space, and transportation, which affected both us and the communities we were trying to reach. However, by partnering with existing organizations, using the connections of our team members, and developing personal connections with our students, we were able to build our STEM education program from the ground up. Throughout this season of our robotics program, we have experimented with different levels of hands-on and virtual learning, aided schools in Baltimore in developing STEM curricula, and organized workshops throughout our local community. We have shown that, through a combination of virtual and hands-on learning, strategic partnerships, and personal connection, an outreach program can be built with minimal resources and achieve a high degree of success.

Mathematics Model of Honey Bee Colony

Qingyuan Yao (China)

0
Honeybee populations have suffered a puzzling decline known as Colony Collapse Disorder (CCD), which harms both nature and the economy. We developed models to find an intrinsic reason for this sharp decline and to address real problems related to the loss of honeybees.
In building the model, we drew on a number of academic papers and derived a formula for the total population of a colony. The model incorporates several factors that can affect the overall population. In addition, we accounted for honeybees' special winter behavior, which markedly affects the final result.
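
Since the abstract does not give the formula itself, the following is only a generic sketch of such a colony model: logistic growth with an assumed extra winter loss term, integrated with Euler steps over one year. All constants are illustrative assumptions.

```python
# Illustrative colony model (assumed form, not the authors' actual formula):
# logistic growth in season plus an extra per-day loss rate in winter.
CAPACITY = 50_000        # assumed carrying capacity of the hive
GROWTH = 0.05            # assumed per-day growth rate in season
WINTER_LOSS = 0.01       # assumed extra per-day loss rate in winter

def winter(day):
    """Treat roughly the last third of the year as winter."""
    return day % 365 >= 240

def simulate(pop0, days, dt=1.0):
    """Euler-integrate the population and return its daily history."""
    pop = pop0
    history = [pop]
    for day in range(days):
        growth = GROWTH * pop * (1 - pop / CAPACITY)
        loss = WINTER_LOSS * pop if winter(day) else 0.0
        pop = max(pop + dt * (growth - loss), 0.0)
        history.append(pop)
    return history

hist = simulate(10_000, 365)
```

With these assumed rates, the colony grows toward capacity during the season and shrinks through winter, the qualitative behavior the model above is built to capture.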

Reimagining Seawalls: Exploring Shoreline Protection Methods with Minimal Surface Inspired Seawalls

Alex Yang and Michael Wen (USA)

0
Rising sea levels are an apparent and inevitable problem as we move further into the 21st century. Parts of world-famous cities like Tokyo, Mumbai, and New York City have already begun submerging, with many areas projected to be fully submerged by the end of the century.

With rising sea levels, waves and the abrasive energy they bring disrupt the flow of everyday life. Waves erode shorelines, which creep closer and closer to vulnerable parts of these cities, causing enormous damage to the population and infrastructure. Traditional seawall designs are not only expensive and outdated, but also ecologically harmful, as they block much of the area that seashore organisms inhabit.

To help combat this problem, this study focuses on exploring more efficient seawall designs. Porous structures such as Triply Periodic Minimal Surface (TPMS) based seawalls have been chosen due to their mathematical simplicity, mechanical strength, cost effectiveness, accessibility, and ecological friendliness. This research will compare the seawalls made using different types of TPMS structures. TPMS structures are first created using MathMod and Blender using a mathematically defined equation. Computational Fluid Dynamics (CFD) tools such as Ansys Discovery and Fluent are used to investigate the potential performance of a new seawall design (water flow speed reduction, wave height reduction, etc.) with respect to engineering parameters of the seawall such as porosity, slope and spatial frequency. The results will help recommend the best paths to take in engineering the next generation of seawalls.
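
As a concrete example of a TPMS, the gyroid is defined implicitly by sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = 0 (whether the study uses this particular surface is not stated). A sketch of sampling its implicit field, from which a mesher such as marching cubes would extract the surface for CFD or printing:

```python
import numpy as np

# Implicit field of the gyroid, one common TPMS. Its zero level set
# sin(x)cos(y) + sin(y)cos(z) + sin(z)cos(x) = 0 defines the surface;
# freq scales the spatial frequency mentioned in the study.
def gyroid(x, y, z, freq=1.0):
    x, y, z = freq * x, freq * y, freq * z
    return np.sin(x) * np.cos(y) + np.sin(y) * np.cos(z) + np.sin(z) * np.cos(x)

# Sample the field on one periodic cell; a solid-fraction estimate follows
# from thickening the surface to |f| < t (t is an assumed wall thickness).
n = 40
axis = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
field = gyroid(X, Y, Z)
solid_fraction = np.mean(np.abs(field) < 0.4)
```

Varying `freq` and the thickness threshold is one way to sweep the porosity and spatial-frequency parameters the CFD comparison investigates.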

Evaluating the Effectiveness of Design Processes in Mechanical Engineering Applications

Diana N Omar (Johns Hopkins University Applied Physics Laboratory, USA)

1
Mechanical design engineering is a field whose hold on societal functionality has grown with technological advancement. However, it is important to understand how valuable designs come to be and how more can be made. Without valuable technology, the intended impact on users will not be actualized. There are many current mechanical design methodologies and processes meant to strengthen the user-developer relationship, but some are flawed in their outcomes. Key valuable components include the empathy associated with Design Thinking, the nonlinearity of the Engineering Design Process, collaboration in Design Reviews, and expertise in various roles. However, drawbacks include weaker communication in the Engineering Design Process and only slight reliance on the user's opinion in Design Thinking. The purpose of this study is to evaluate the effectiveness of current mechanical design methodologies through observation and to determine ways to promote valuable technological contributions to society with organization and impact. Each process has its benefits and drawbacks, but a synthesis of the methodologies can help mitigate problems that threaten valuable contributions to technology.

Integration of Quantum Computing with Deep Learning

Amin Boukari (Caesar Rodney High School, USA)

2
Machine learning is an algorithmic approach to modeling data computationally by learning from features in a dataset. There is growing interest in applying machine learning tools to solve diverse problems in multidisciplinary fields. Deep learning is a subset of machine learning that models data through a network of parameterized layers, called a neural network. The network parameters are optimized through a process called training, by minimizing a specific loss function with gradient descent algorithms. Deep learning techniques have been found to drastically improve prediction accuracy in many analytical models, and their predictive abilities are currently unmatched. Quantum computing is a field of computing that uses quantum bits, called qubits, to perform computation. This allows quantum computers to take advantage of quantum phenomena such as superposition and entanglement. Quantum computing is an evolving field with many potential applications, including cybersecurity, encryption, medical research, and meteorology. Here, I propose to combine deep learning and quantum computing by studying the integration of quantum computing methods with deep learning algorithms to improve model accuracy. In this implementation, I added to a Convolutional Neural Network a parameterized quantum circuit that uses a number of qubits equal to the number of classes. The qubits in the circuit are rotated about the y-axis using a parameter theta, which is tuned through gradient descent during the training process. Limitations of latency and runtime on actual quantum computers make it necessary to run the quantum circuit on a simulated backend in the training system. To train and test this model, I used the CIFAR-10 dataset and a ten-layer Convolutional Neural Network.
In this implementation, ten qubits were used to train and test a supervised classifier on the publicly available CIFAR10 dataset. While using a local simulated backend drastically improves training times, it still takes significantly longer to train than a completely classical network due to the sequential nature of quantum simulators and the inability to run them on GPUs, which necessitates GPU-CPU communications and poses a bottleneck in the training process. My results show significant improvements over the purely classical methods when using the same number of epochs and batches, even accounting for the bottleneck due to the simulated backend. This work shows that quantum computing can be successfully integrated with deep learning algorithms and shows promising results.
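The gradient-descent tuning of the rotation angle theta can be illustrated without any quantum SDK. The sketch below is a one-qubit simplification (the actual work used a simulated multi-qubit backend): it computes the measurement probability after an Ry(theta) rotation and differentiates it with the parameter-shift rule, a standard way to obtain exact gradients of parameterized rotation gates.

```python
import math

def qubit_expectation(theta):
    # Probability of measuring |1> after applying Ry(theta) to |0>
    return math.sin(theta / 2) ** 2

def parameter_shift_grad(f, theta, shift=math.pi / 2):
    # Parameter-shift rule: exact gradient for circuits built from rotations
    return 0.5 * (f(theta + shift) - f(theta - shift))

theta = 0.7
grad = parameter_shift_grad(qubit_expectation, theta)
analytic = 0.5 * math.sin(theta)  # d/dtheta of sin^2(theta/2)
```

Because the shift rule needs only two extra circuit evaluations per parameter, it fits naturally into a training loop that alternates classical and (simulated) quantum steps.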
Speaker
Speaker biography is not available.

Machine Learning Predictive Model to Reduce the Harmful Environmental Effects of Pesticide Usage in Agriculture

Kareem Boukari (Caesar Rodney High School & Delaware State University, USA)

4
Many of the challenges we face today, including sea level rise, food security, extreme weather, water equity, invasive species, and climate change, have direct and serious consequences for all living organisms, our health, and our quality of life and future. Our environment is changing for the worse because of our own actions, and addressing these challenges requires taking steps and building tools for its protection.

In agriculture, crop and food production are necessary to maintain supply and avoid hunger and inflation. To protect their crops, farmers need to use insecticides, pesticides, and nutrients. However, these chemicals are harmful to our health and to the ecosystem, as they pollute the environment. This creates a trade-off between increasing crop production and reducing land treatments: cutting back these chemicals too far may reduce food production.

To address this challenge and avoid unnecessary, excessive use of pollutants, I propose building new predictive supervised machine learning models, based on decision trees, Support Vector Machines, and Random Forests, that help farmers use pesticides in a way that benefits both them and the environment. With this tool, farmers can reduce the amount and frequency of chemical use based on the predicted effect on crop health, running experiments that minimize environmentally harmful chemicals as much as possible while keeping good, sustainable crop productivity.

The publicly available dataset used in this project is a three-class labelled dataset that contains the quantity of insecticides, pesticides, nutrients, and the soil category, frequency, season, and type of crop.

To choose the best model, I ran multiple experiments and compared different models with different parameters using the Python scikit-learn library. For training, k-fold cross-validation was used to split the data into training and testing sets.

Using a Bayesian classifier, we obtained an accuracy of 82%. The SVM was not able to separate the classes well, yielding zero precision and recall for the two imbalanced classes. The decision tree classifier reached 83% accuracy. We also ran multiple experiments with random forests of 200, 500, 700, and 1000 trees at different depths. The best average accuracy, 89%, was obtained with the XGBoost random forest using 1000 trees of depth 12. However, precision and recall for the two imbalanced classes remained low. To overcome the class imbalance, further work is needed using data augmentation or undersampling.
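The stratified splitting behind the cross-validation can be sketched in plain Python. This is an illustrative reimplementation, not the scikit-learn code used in the study; it shows why stratification matters for imbalanced classes: every fold keeps the same class proportions, so precision and recall on the minority classes remain measurable.

```python
from collections import defaultdict

def stratified_k_fold(labels, k):
    """Yield (train_idx, test_idx) pairs that keep class proportions per fold."""
    by_class = defaultdict(list)
    for i, y in enumerate(labels):
        by_class[y].append(i)
    # Deal each class's indices round-robin across the k folds
    folds = [[] for _ in range(k)]
    for idxs in by_class.values():
        for j, i in enumerate(idxs):
            folds[j % k].append(i)
    for t in range(k):
        test = folds[t]
        train = [i for f, fold in enumerate(folds) if f != t for i in fold]
        yield train, test
```

With a 2:1 class ratio, each test fold below contains exactly two majority-class and one minority-class sample.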

In conclusion, the proposed method can serve as a strategy to convince farmers to reduce chemicals that harm the environment while keeping good crop productivity. The findings will give farmers novel insights into how much pesticide exposure crops can tolerate before major damage occurs, and how usage can be reduced without sacrificing productivity.
Speaker
Speaker biography is not available.

Simulating Quantum Magnetism on Noisy Quantum Computers: An Analysis of Trotter-Suzuki and qDRIFT

Peter C Seelman (Johns Hopkins University Applied Physics Laboratory & Glenelg Country School, USA); Taohan Lin (Johns Hopkins University Applied Physics Laboratory & Thomas Jefferson High School for Science and Technology, USA); Milan Tenn and Samuel N Manolis (Johns Hopkins University Applied Physics Laboratory, USA)

1
Quantum computing is a new computing paradigm that offers more efficient solutions to computationally intensive problems, including drug development, breaking RSA encryption, and simulating quantum mechanical phenomena. However, current quantum computers are noisy, meaning that errors in the implementation of gates and quantum bits interfere with their ability to perform computations accurately. We study quantum algorithms on noisy quantum computers for simulations of material properties, particularly quantum magnetic materials described by Ising and Heisenberg models. This paper investigates two methods for implementing Hamiltonian simulation, the First Order Trotter-Suzuki (FOTS) method and qDRIFT [E. Campbell, Phys. Rev. Lett. 123, (2019).], two of the leading algorithms for simulating the time dynamics of various material properties. The FOTS method creates a deterministic quantum circuit by repeating a set of quantum gates for each simulation time step. qDRIFT creates circuits through random sampling and can potentially yield a significant reduction in the number of gates needed to perform an identical quantum simulation compared to FOTS. We examined the efficacy of the qDRIFT algorithm compared to the FOTS method with and without noise and compared the resiliency of these algorithms to the effects of noise. We ran tests via classical simulation and IBM Quantum hardware using the Ising and Heisenberg models for quantum magnets. Due to the limitations of publicly available quantum computers, we used an Ising chain with a maximum of six qubits. We tested a variety of parameters: (i) fixed time of evolution with a fixed initial state and observable, (ii) randomized initial state with a fixed observable, and (iii) randomized initial state and observable. When the initial state and observable were fixed, qDRIFT on noiseless simulation was found to have greater algorithmic error than FOTS.
However, on IBM Quantum systems, the difference in accuracy between qDRIFT and FOTS was smaller. This indicates that while the efficacy of qDRIFT was worse without noise for the models we studied, qDRIFT is less affected by noise. After the initial state and observable were randomized, the difference in error between the two algorithms decreased significantly. In addition, we showed that in cases where the Heisenberg model has one dominant ferromagnetic coupling interaction and other weaker interactions, qDRIFT could outperform the FOTS method. In future research conducted on more powerful computing hardware, we aim to test qDRIFT when applied to Hamiltonians with more terms and a few dominant ferromagnetic interactions, like those relevant to quantum chemistry.
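The core of qDRIFT is a randomized gate schedule rather than a fixed sequence. The sketch below is a library-free illustration with hypothetical coefficients, not the circuits run in this study: it samples Hamiltonian terms with probability proportional to their strengths, so a dominant coupling, like the ferromagnetic interaction discussed above, appears most often in the schedule.

```python
import random

def qdrift_schedule(coeffs, time, n_samples, seed=0):
    """Build a qDRIFT schedule: term j is sampled with probability |h_j|/lambda,
    and each sampled term is evolved for duration lambda * time / n_samples."""
    rng = random.Random(seed)
    lam = sum(abs(h) for h in coeffs)       # lambda = sum of term strengths
    tau = lam * time / n_samples            # fixed duration per sampled term
    weights = [abs(h) / lam for h in coeffs]
    return [(rng.choices(range(len(coeffs)), weights)[0], tau)
            for _ in range(n_samples)]
```

The schedule's total evolution time is always lambda times the target time, independent of how many Hamiltonian terms there are, which is why qDRIFT can need fewer gates than a Trotter decomposition when one term dominates.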
Speaker
Speaker biography is not available.

Novel Medical Sensor Design For Mass Casualty Triage and Trauma Care

Diya Sharma (Johns Hopkins University Applied Physics Laboratory, USA)

0
Mass casualty incidents (MCIs) are large-scale accidents that can result in trauma or casualties. These events often overwhelm hospitals because of the demand for equipment and trained personnel. Currently, the triaging system uses paper tags to categorize patients into four groups: deceased, emergent, delayed, and minor. The tags allow first responders or EMTs to direct attention and resources to patients in critical condition, but the process can be chaotic, with first responders rushing to rapidly screen, categorize, and transport patients, and the tags are not updated frequently as statuses change. To improve the current triage system, our project replaces paper tags with an electronic tag that can be quickly placed on patients and used to actively monitor their health status at the scene of the MCI. This project is focused on two main tasks: 1) identifying and designing an electronic sensor package that collects critical vital signs of trauma care patients and 2) redesigning an original prototype using a custom printed circuit board (PCB) to create a more compact and adhesive tag. By utilizing mechanical design tools like SOLIDWORKS and Onshape, we were able to implement an iterative design process and improve our electronic sensor tag's packaging. We also used the Arduino IDE to program the tag with open-source libraries from various vendors, and low-cost electronics to create our proof of concept. Based on the first responder care decision tree, we identified blood oxygen levels, heart rate, temperature, and motion as the most critical health metrics monitored in trauma patients. After programming and integrating the electronics for the system, we produced a basic 3D printed case that combined our microcontroller and sensors into a single package and included an LED to signal the status of the patient. Looking to minimize size and weight and improve usability, we designed our own PCB using KiCAD to include only the necessary components of our electronic tag.
As a result, we were able to design a compact and adhesive electronic tag that can be easily placed on patients, efficiently collect vital signs, categorize patients into priority groups, and indicate patient status with an LED visible to first responders. This work is a basis for novel mass casualty triage tagging, as opposed to the current method of paper tagging. In future work, we hope to expand the project to search and rescue missions as well as military medicine by combining the emplacement of tags onto patients with independent robots.
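The tag's categorization step can be sketched as a simple rule cascade over the vital signs. The thresholds below are purely illustrative placeholders, not the project's calibrated values and not clinical guidance; they only show how blood oxygen, heart rate, and motion readings could map to triage groups for the LED indicator.

```python
def triage_category(spo2, heart_rate, moving):
    """Map vitals to a triage group. All thresholds are illustrative
    placeholders, not clinical values."""
    if spo2 < 85 or heart_rate < 40:
        return "emergent"   # critical vitals: immediate attention
    if spo2 < 92 or heart_rate > 120:
        return "delayed"    # abnormal vitals: monitor closely
    if not moving:
        return "delayed"    # no motion despite normal vitals
    return "minor"          # stable
```

On the real tag this decision would run periodically on the microcontroller, updating the LED as statuses change rather than freezing at the initial assessment like a paper tag.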
Speaker
Speaker biography is not available.

Quantum Noise Mitigation Via Randomized Compiling

Harry Rathbun (Johns Hopkins University Applied Physics Laboratory, USA); Alex J Zhang (Johns Hopkins Applied Physics Laboratory, USA); Colin La and Kenji Ishi (Johns Hopkins University Applied Physics Laboratory, USA)

0
Mentored by: Tom Gilliss, Gregory Quiroz, Paraj Titum, Leigh Norris

Quantum computers harness properties of quantum mechanics to make complex calculations that classical computers cannot. Thus, quantum computers have the potential to solve problems that today's best supercomputers cannot, such as problems in drug development, computational biology, prime factorization, and optimization. However, current quantum computers are greatly hindered by error, also known as noise.
There are two primary types of noise: coherent and stochastic. Coherent noise is error created by the environment and systemic flaws. Some causes of coherent noise are detuning, calibration errors, and crosstalk (qubits interacting with one another in an uncontrolled way). Stochastic noise is random error. It is often caused by fluctuating fields in the environment or interactions with other systems. Errors caused by certain types of stochastic noise can be corrected by quantum error correction (QEC), a technique that uses redundancy to protect the information stored in a quantum computer. The same is not true for coherent noise, which generally leads to the highest error rates under QEC.
Randomized Compiling (RC), introduced by Wallman and Emerson in 2016, transforms coherent noise into a type of correctable stochastic noise [1]. Our objective is to study the effectiveness of RC on both coherent noise and stochastic noise. RC effectively alters the noise by inserting random gates into a quantum circuit. Since RC is random, it produces the desired outcome only after averaging over many circuit evaluations. Importantly, RC keeps the circuit logically equivalent and does not extend the circuit length. Unlike other error mitigation methods, it can be adapted to many different types of quantum circuits.
Using the Python library Qiskit, which is an open-source software development kit for working with quantum computers, and IBM's free online quantum computers, we simulated RC on coherent and stochastic noise. Our experiments showed that RC reduced the overall error. We found that the error for the bare circuit without RC grew exponentially as circuit depth increased, while the error for the RC circuit grew linearly. The standard deviation of outcomes was greater for the RC circuit due to the randomness. Despite this, the RC circuit clearly showed superior performance to the bare circuit.
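The qualitative mechanism can be reproduced in a toy model without any quantum SDK. In the sketch below (an illustration of the idea, not our Qiskit experiment), a fixed over-rotation of eps per gate compounds coherently, so the error grows rapidly with circuit depth; randomizing the sign of each rotation, a stand-in for the random Pauli twirls RC inserts, makes the errors add incoherently, and the averaged error grows only linearly in depth.

```python
import math
import random

def coherent_error(n, eps):
    # n identical over-rotations add up in the angle itself
    return 1 - math.cos(n * eps)   # deviation of the outcome from ideal

def twirled_error(n, eps, trials=2000, seed=1):
    # Random sign flips (a toy twirl) decorrelate the per-gate errors,
    # so only the variance (~ n * eps^2) of the total angle survives
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        angle = sum(rng.choice((-1, 1)) * eps for _ in range(n))
        total += 1 - math.cos(angle)
    return total / trials
```

At depth 50 with eps = 0.05, the coherent error is already of order one while the twirled average stays near n * eps^2 / 2, mirroring the fast-versus-linear growth we observed on the real circuits.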

[1] J. J. Wallman and J. Emerson, Physical Review A 94, 052325 (2016).
Speaker
Speaker biography is not available.

First Ever Whole Genome Sequencing and De Novo Assembly of the Freshwater Angelfish Pterophyllum scalare

Indeever Madireddy (USA)

0
The freshwater angelfish, Pterophyllum scalare, is a popular freshwater cichlid kept by aquarium hobbyists around the world. Originally from South America, these fish are well known for their monogamous breeding patterns and thorough parental care of offspring. Although the behaviors of the angelfish have been well studied, very little is known about its nuclear genetics, as its genome has never been fully sequenced and assembled. Cichlids are of special importance to biomedical research, as they have been used as model organisms to study craniofacial variation and neurobiology. Investigating the genome of the angelfish may enable its use as a model organism for further biological research. In this work, I sequenced, assembled, and annotated the complete genome of the freshwater angelfish, in addition to the full mitochondrial genome, with Oxford Nanopore Technologies.
With the MinION MK1B device, 6.94 million sequencing reads and an estimated 10.1 gigabases at a 3.24 kb N50 read length were collected. Two flow cells were used to collect this sequencing data, and the flow cells were run for 72 hours each. The reads collected had a mean read quality of 15.06 and a median read quality of 14.58, corresponding to an estimated 97% sequencing accuracy. Reads were collected at an average translocation speed of 220 bases per second.
Collected reads were then screened to identify potential contaminant organisms in the sequencing data. The kraken2 tool identified that Pseudomonas aeruginosa, a common opportunistic aquatic pathogen, was the largest contaminant of the sequencing reads.
The mitochondrial genome of the angelfish was assembled from the sequencing reads. All 37 conserved mitochondrial genes, including 2 rRNAs, 13 protein-coding genes, and 22 tRNAs common to eukaryotic organisms, were identified, indicating a complete and robust assembly. This new assembly was 25 bp longer than the reference mitochondrial assembly, with 99.1% similarity.
The final nuclear genome assembly consisted of 15,486 contigs totaling 734.79 Mb with a final BUSCO score of 86.5% and a 41% GC content (Simão et al., 2019). The genome size and GC content are similar to those of other fish species, such as the Asian seabass and the Nile tilapia. The N50 contig length of the assembled genome was 96,962 bp, and the longest contig was 543,394 bp. RepeatMasker masked 12.47% of the genome as simple repeat sequences.
Functional annotation of the genome was performed with NCBI blastp (ver. 2.12.0) through the GenSAS platform. A total of 24,247 unique protein-coding sequences orthologous to other species were identified in the angelfish genome against the RefSeq vertebrate-other database. Most genes, 59%, were orthologous to Archocentrus centrarchus, a closely related South American cichlid. TimeTree suggests that A. centrarchus and P. scalare diverged between 28.7 and 72.4 million years ago.
Future work would involve RNA sequencing of the angelfish to build an appropriate transcriptome of the organism. Illumina sequencing could also be performed to improve the current assembly.
Speaker
Speaker biography is not available.

Chat Bot Implementation on Mattermost Servers Using APIs

Taylor Ann Benning (Johns Hopkins University Applied Physics Laboratory, USA)

0
Today, chat bots play a vital role in a variety of online spaces, including online retail and technical support. Although they have been largely utilized for simple and repetitive information sharing up to this point, chat bot capabilities can become more widely distributed through integration into chat servers such as Mattermost and Slack. Additionally, the introduction of ChatGPT (based on GPT-3) has shifted chat bots from repetitive sharers to active conversational participants. To research the capability of chat servers to host chat bots, we first launched a Mattermost chat server hosted on a Linux virtual machine. After setting up the server, we used the Mattermost API and the Mattermost Python Bot plugin to write code implementing two chat bots on the server. Both bots are equipped with the capability to recognize when a user mentions them in the chat, but the two chat bots are of varying complexity. One chat bot understood only a handful of phrases, while the other (ELIZA) utilized a separate library of prewritten responses to facilitate more advanced communication. Finally, we utilized the GPT-3 API to create the most advanced chat bot of the three. Although this last chat bot generated significantly more human-like messages than the other two, it still made frequent grammatical errors when responding. These various chat bots were all implemented into the chat server using almost identical techniques, revealing an easily repeatable method for showcasing AI capabilities on widely used chatting platforms. Providing widely available showcases of different chat bot capabilities could serve as an impactful source of education on discerning AI-generated text from human-created text.
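The mention-detection logic shared by all three bots can be sketched as follows. The handler below is a minimal stand-in with made-up phrases and bot names, not the actual Mattermost API; in a real deployment it would be registered as a message callback through the Mattermost Python Bot plugin.

```python
import re

# Canned responses for the simplest bot (phrases are illustrative)
RESPONSES = {
    "hello": "Hi there!",
    "help": "Try asking me about the weather.",
}

def reply(bot_name, message):
    """Respond only when the bot is @-mentioned in the message."""
    if not re.search(rf"@{re.escape(bot_name)}\b", message):
        return None  # ignore messages that don't mention the bot
    for phrase, answer in RESPONSES.items():
        if phrase in message.lower():
            return answer
    return "Sorry, I don't understand."
```

Swapping the `RESPONSES` lookup for an ELIZA-style pattern library or a GPT-3 API call changes only the body of `reply`, which is why all three bots could share nearly identical integration code.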
Speaker
Speaker biography is not available.

Min-Max Optimal Matching

Yibo Cheng (USA)

0
We design algorithms for the One-Sided Matching problem, in which a set of graduates must be matched to a set of jobs over which each graduate has a preference list. In particular, we study a novel criterion called Min-Max Optimality, which is achieved by the matching that gives the fewest graduates their ℓth choice; subject to this, the fewest graduates their (ℓ − 1)th choice, and so on, where ℓ is the maximum length of any graduate's preference list plus 1. In this paper, we give algorithms for this problem that combine classic results in matching theory to compute a Min-Maximal rank in time O(m·√n·log ℓ) and find the Optimal matching in time O(n³), where n is the total number of graduates and jobs.
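One way to compute the first component of the criterion, the smallest rank at which every graduate can still be matched, is to combine a classic augmenting-path matching with a scan over the allowed rank. The sketch below is a simplified illustration of that combination, not the paper's algorithm: it uses Kuhn's O(n·m) matching and a linear scan in place of the faster matching and binary search that give the stated O(m·√n·log ℓ) bound.

```python
def max_matching(adj, n_jobs):
    """Kuhn's augmenting-path maximum bipartite matching.
    adj[g] lists the jobs graduate g may take."""
    match_job = [-1] * n_jobs  # match_job[j] = graduate holding job j

    def try_augment(g, seen):
        for j in adj[g]:
            if j not in seen:
                seen.add(j)
                if match_job[j] == -1 or try_augment(match_job[j], seen):
                    match_job[j] = g
                    return True
        return False

    return sum(try_augment(g, set()) for g in range(len(adj)))

def min_max_rank(prefs, n_jobs):
    """Smallest rank r such that every graduate can be matched using only
    their top-r choices; None if no perfect matching exists at any rank."""
    n = len(prefs)
    for r in range(1, max(map(len, prefs)) + 1):
        truncated = [p[:r] for p in prefs]
        if max_matching(truncated, n_jobs) == n:
            return r
    return None
```

In the example below, all three graduates rank job 0 first, so no perfect matching exists at rank 1, but second choices suffice.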
Speaker
Speaker biography is not available.

The strategy formation process of publicly listed firms under the "Double Reduction" Policy - a pilot study of factors impacting firm survival

Leming Liu (China); Lufan Wang (Florida International University, USA)

57
The 2021 "Double Reduction" policy issued by the Chinese government, which banned all K-12 (kindergarten through twelfth grade) for-profit tutoring firms from operating under a profit-seeking model, erased 2 billion dollars (over 10 billion RMB) of market value and caused destructive damage to both public and private firms. From December 2020 to December 2021, 40 listed public firms saw their stock prices fall by an average of 68.815%. In this work, we focus on publicly listed firms to study their business reconfiguration strategies in the face of an unexpected, drastic external shock that forced them to change revenue models almost overnight.

By collecting data on 71 publicly listed firms registered in mainland China under the education category, we found the following. First, 87.5% turned their K-12 business into non-profits, and 10% of firms entered markets unrelated to general education. Second, 30.7% of publicly listed firms declared bankruptcy within a year of the policy's release. Third, 75% are still pivoting toward a potentially sustainable business model while handling customer refunds. Fourth, 15% have finished their navigation phase and entered alternative profit-seeking markets.

By collecting, coding, and analyzing the data about firms' pivoting behaviors during the transition, as well as firms' demographic characteristics, we found that firms' accumulated assets and revenue model diversity both positively correlated to the possibility of firms' survival.

In theory, the work contributes to the literature on strategy formation under unexpected shocks. It provides an extreme case of how publicly listed firms across a market pivot their survival strategies within a very short period of time. In practice, it is the first academic analysis to provide insights to Chinese policymakers and public-firm stakeholders about the impact of the "Double Reduction" policy, and it alerts the senior management teams of large firms to potential policy shocks.
Speaker
Speaker biography is not available.

Paving the on-ramp to AI learning in the classroom

James Murray (Holy Ghost Preparatory School, USA)

1
Autonomous technology is used in many different avenues of life. The industrial, technological, and automotive industries all use autonomous technology, with recent headlines promising fully autonomous cars in the near future. In this project our group utilized a resource called AutoAuto cars to explore autonomous driving in virtual and classroom settings. Over the course of the school year we used a combination of physical cars and an online learning resource, AutoAuto Labs, to deepen our understanding of Python and its applications in machine learning and artificial intelligence. Lessons included programming and artificial intelligence, computer vision, object detection, natural language processing, and an introduction to data science. We worked in a virtual environment and engaged with GitHub libraries to understand the code within and further our experience.

Using hands-on physical projects, we were able to perform various tasks related to autonomous driving and establish a baseline of current capabilities for driving with code and object detection with frame recognition. These included cautious driving, screenshots from various reference points, color detection, and buzzer noises. We have applied Python programming skills to navigate many different virtual and physical challenges and have also designed custom challenges to make the learning process fun. All of these challenges are highly competitive among classmates, as the title of best programmer/driver in the class is always on the line. In the future we plan to further develop these projects using AI, focusing on making the car self-sufficient so that it can make decisions completely on its own without any human input.
Speaker
Speaker biography is not available.

Low-cost, High Accuracy Smart Parking Solution for Urban Areas

Vivek Pragada (Central Bucks South High School, USA)

1
Intelligent parking systems are essential for enabling sustainable parking solutions. Searching for a free parking spot in urban areas wastes a significant amount of drivers' time and fuel, contributing heavily to total traffic congestion and the resulting emissions. In urban areas, an estimated 45% of total traffic congestion is due to drivers looking for parking, with an estimated cost impact of $345 per driver because of wasted time, fuel, and emissions. In New York City, car drivers waste an average of 107 hours searching for parking. Automated and sustainable parking will be critical in our future, when 70% of the global population is anticipated to live in urban areas.

Vehicle presence detection is a fundamental aspect of intelligent parking systems, which inform users about parking spot occupancy throughout their area in order to minimize wasted search time. This can only be accomplished efficiently by smart parking sensors that convey real-time information about parking spot occupancy. One of the key requirements for smart parking sensors is highly accurate detection of parking spot occupancy for various automobile makes and models, including recent electric vehicles (EVs), under a multitude of practical parking events. Also crucial are long battery life, easy installation, and low maintenance, all of which need to be met under the strict constraints of low cost, high durability, and operation under numerous environmental conditions.

While several approaches are proposed by recent studies, most are either unreasonably expensive, require considerably high power consumption, or cannot provide the accuracy necessary for most practical parking scenarios. There is an urgent need for low-cost smart parking sensors that can provide high accuracy in almost any environmental conditions.

In this paper, we propose a smart parking sensor that consists of a magnetometer and a low-power wide area (LPWA) connectivity module. Unlike other state-of-the-art approaches, data from multiple parking sensors in adjacent parking spots are synthesized, dramatically increasing accuracy of detection. The accuracy of parking spot occupancy increases especially when the magnetometers are distributed evenly across parking spaces, fitting nicely with typical parking lot deployments. Our research shows that this technique helps in determining parking spot occupancy significantly better than independent sensors for practical parking events, including front-park, reverse-park, pass-through-park, double-park, and drive-by events, as well as various automobile makes and models, including EVs.

To implement this multi-sensor approach efficiently, the magnetometers cannot continuously broadcast their readings; instead, they are configured with specific thresholds, in both measured magnitude and duration, that determine when to upload information. When the configured thresholds are met, the magnetometer in the smart parking sensor triggers an event, via a simple microcontroller, to its corresponding LPWA LoRa communication module, which transmits it to the LoRa base station and onward to the smart parking server in the cloud. The smart parking server synthesizes event data received from multiple smart parking sensors. Since the smart parking server is aware of each specific deployment, a reliable algorithm can be implemented to accurately determine parking spot occupancy changes due to the new parking event.
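The magnitude-and-duration trigger can be sketched as a small state machine over the magnetometer samples. The code below is an illustration of that filtering logic with hypothetical units and thresholds, not the sensor firmware: it reports only disturbances that both exceed the magnitude threshold and persist long enough, suppressing brief drive-by events.

```python
def detect_events(samples, mag_threshold, min_duration):
    """Return (start, end) index pairs where the field magnitude stays at or
    above mag_threshold for at least min_duration consecutive samples."""
    events, start = [], None
    for i, m in enumerate(samples):
        if m >= mag_threshold:
            if start is None:
                start = i          # disturbance begins
        else:
            if start is not None and i - start >= min_duration:
                events.append((start, i))   # long enough: report it
            start = None           # too short or ended: reset
    if start is not None and len(samples) - start >= min_duration:
        events.append((start, len(samples)))
    return events
```

On the device, each reported event, rather than the raw sample stream, is what the microcontroller forwards over LoRa, which is what keeps the power budget low.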
Speaker
Speaker biography is not available.

Predicting Patient Hospital Admission for Triage with Machine Learning: An Analysis of Emergency Service Index Data

Rishi Mulchandani (Johns Hopkins University Applied Physics Laboratory, USA); Soma S Hebbar (Johns Hopkins University Applied Physics Laboratory (JHUAPL), USA); Jayant Maheshwari (Johns Hopkins University Applied Physics Lab (JHUAPL), USA)

0
Background:
In recent years, the use of artificial intelligence and machine learning techniques in healthcare has become increasingly crucial due to the vast amounts of data and the difficulty of manual analysis. The focus of this study is to use supervised machine learning models to accurately predict patient hospital admission based on Emergency Service Index (ESI) data. The ESI classifies emergency room patients into five risk severity levels, with Level 1 being the most severe and Level 5 the least severe. The importance of the ESI lies in its ability to allocate resources efficiently and accurately in critical care situations.

Methods:
Using data from the National Hospital Ambulatory Medical Care Survey (NHAMCS), we developed a machine learning framework to predict admission and critical care outcomes in patients presenting to emergency departments. Our objective was to identify the socio-demographic and clinical factors associated with admission and critical care outcomes, and to achieve high accuracy in our predictions. By utilizing 76 numerical features and 9 categorical features from the NHAMCS dataset, we trained and validated our models using logistic regression (LR), random forest (RF), and XGBoost algorithms. The categorical features are one-hot encoded and combined with the numerical features to form the complete feature set. The data is then split into training, testing and validation sets using the train_test_split method with a 20% test size and a random number generation seed of 1234. Cross-validation is performed using StratifiedKFold with N_Folds set to 10.
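The one-hot encoding step can be sketched in plain Python. This is an illustrative reimplementation of what the pipeline's encoder does, not the study's code: each categorical value becomes an indicator vector, and those vectors are concatenated with the numerical features to form the complete feature set.

```python
def one_hot(values, categories):
    """One-hot encode a categorical column given its known category list."""
    index = {c: k for k, c in enumerate(categories)}
    return [[1 if index[v] == k else 0 for k in range(len(categories))]
            for v in values]

def combine(numeric_rows, encoded_rows):
    """Concatenate numerical features with the one-hot indicators, row by row."""
    return [num + enc for num, enc in zip(numeric_rows, encoded_rows)]
```

With 9 categorical columns encoded this way alongside the 76 numerical features, each patient record becomes a single fixed-length vector suitable for LR, RF, or XGBoost.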

Expected Results:
The study aims to achieve high accuracy in predicting patient hospital admission based on ESI data. The results of this study highlight the potential of machine learning in healthcare and the usefulness of XGBoost as the best performing model in this study.

Conclusion:
This study demonstrates the significance of using machine learning techniques in healthcare, particularly in the prediction of patient hospital admission. The results of this study show the potential of XGBoost as a powerful tool in healthcare and emphasize the importance of accurate patient classification during the triage process for the benefit of patients. Future studies could aim to expand the dataset and evaluate the models on a larger scale, as well as investigate the use of unsupervised learning techniques in healthcare prediction.
Speaker
Speaker biography is not available.

Wearable ultrasound devices for blood pressure measurement: a simulation study

King Ho Guo (UWC CSC Chang Shu College, Japan)

6
High blood pressure is closely linked with diseases such as stroke, heart disease, and heart attack; stroke alone claims 5 million lives each year worldwide, and another 5 million people are left permanently disabled. A study shows that more than 75% of people over the age of 70 are affected by blood pressure problems. Therefore, a real-time monitor is important for reducing the serious diseases associated with high blood pressure.
Cuffs are commonly used for measuring blood pressure in hospitals; however, it is challenging to use such a device for real-time monitoring. ECG is another method of estimating blood pressure, but the need to carry an ECG machine 24 hours a day makes it unrealistic. In contrast, a light sensor that is more portable and real-time is highly desired for practical, daily use.

Wearable ultrasound devices have been studied to address this challenge. A recent study reported the design of an ultrasound array that can be easily worn and can measure blood pressure by characterizing the distance between two blood vessels. Due to its high portability and small size, this wearable ultrasound device can provide real-time, 24-hour monitoring of blood pressure. Since the device relies on ultrasound penetration, its sensitivity is very important to ensure the accuracy of measurement in deep tissue. The reported device employed a piezoelectric element array distributed in a 4 by 4 grid to generate and receive ultrasound and managed to measure blood pressure at a depth of up to (find the value in literature) cm. Changing the distribution of the array is a promising way to further improve the sensitivity and hence the depth of use; however, fabricating a series of such devices is very resource-intensive.

In this work, we designed an ultrasound simulation program that mimics a wearable ultrasound device in order to optimize the array design. Ultrasound transmission and detection were achieved with k-Wave, a MATLAB toolbox for mimicking ultrasound propagation in various media. Blood vessels, blood, and surrounding tissues were mimicked by setting different medium densities and speeds of sound. The simulated devices use ultrasound to calculate the time taken for a pulse to travel and bounce back between the two blood vessels. With prior knowledge of the speed at which ultrasound travels in blood, the distance between the two blood vessels was calculated, and correspondingly the blood pressure can be read out. We designed and tested different array shapes and distributions to maximize the signal-to-noise ratio of the ultrasound signal, which provided the highest sensitivity in blood pressure measurement. Limited by computing power, the current simulation uses a 2D model instead of a 3D model. In future work, a 3D model will be set up and tested on a more powerful workstation to further test and improve the design, and a wearable ultrasound device will be fabricated under the guidance of the simulation results, which could increase the effectiveness and the scope of use of wearable ultrasound devices.
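The distance readout at the heart of the simulation is a pulse-echo time-of-flight calculation. The sketch below shows the arithmetic only; the 1540 m/s soft-tissue speed of sound is a textbook default, not a parameter taken from this study, and a vessel-specific speed would be substituted in practice.

```python
def echo_distance(time_of_flight_s, speed_m_s=1540.0):
    """Pulse-echo ranging: the wave travels to the reflector and back,
    so the one-way distance is half the round-trip path length."""
    return speed_m_s * time_of_flight_s / 2.0
```

For example, a 20-microsecond round trip at 1540 m/s corresponds to a reflector about 15.4 mm deep; tracking how this distance changes over a cardiac cycle is what lets the device infer blood pressure.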
Speaker
Speaker biography is not available.

Engineering Kits to Prevent Summer Learning Loss

Anna R Rosner (Albemarle High School, USA)

2
Over the summer, many students forget what they learned during the previous year, so they start the next grade at a lower academic level than when they finished the previous one. This is especially pronounced in the transition from fifth to sixth grade, with 84% of students demonstrating learning loss according to MAP Growth Assessments (Kuhfeld, 2019). Additionally, most opportunities for students to engage with science and engineering are expensive camps, some of which also require daily transportation, making them even less accessible to working-class families. Early STEM education can increase both student performance and the likelihood that students will later express interest in STEM degrees and careers. To make summer STEM education more accessible, this proposal entails a series of twelve engineering kits aimed at rising sixth graders. Each kit consists of a paper bag, a postcard depicting an inspiring STEM figure, an interactive storybook containing five engineering challenges, all supplies needed to complete the challenges, and discussion questions to be used after the challenges are completed. Each storybook depicts a different character facing challenges on the way to a goal, and students solve the engineering challenges to help the character succeed. Each challenge is designed to take approximately an hour and to emphasize creativity and problem solving rather than simply following instructions. The kits will be distributed weekly to Boys & Girls Clubs in the Charlottesville, Virginia area, with any surplus provided to the Jefferson-Madison Regional Libraries. These kits combine interactive storytelling and engaging problem solving to give students a valuable summer engineering experience and help prevent summer learning loss in reading. Additionally, the provided discussion questions will ensure that students take away valuable skills and fully comprehend the content of the storybook.
Speaker
Speaker biography is not available.

Commercial Truck Parking Conceptual Design

Trung Q Tchiong (Upper Darby School District, USA); Nelson Dennis (Main Author, USA)

1
This project develops an innovative approach to increase commercial truck parking: a facility that can accommodate up to 30 semi-trucks and meets the space requirements of commercial trucks, which are generally 48-53 ft long. It will also have restroom facilities, gas services, an on-site medical center, water drainage, and conventional security systems to protect all types of drivers, along with green-energy features that support the ecosystem and reduce waste. Future work involves applying Autodesk Revit for the restaurant design, Autodesk Fusion 360 for loading analysis (Autodesk.com), Autodesk Civil 3D (https://www.autodesk.com/products/civil-3d/overview) to design the water and waste systems, and NREL's PVWatts calculator (https://pvwatts.nrel.gov/) to design and estimate the performance of the photovoltaic (PV) installations.
Speaker
Speaker biography is not available.

The Ethics and Privacy Risks of Artificial Intelligence in Education: Balancing the Benefits and Concerns with More AI

Cynthia C Zhang (Canada)

0
Artificial Intelligence (AI) is rapidly changing the way we live and work, with the potential to bring many benefits such as availability, digital assistance, labour assistance, and daily applications that render general functions more efficient. However, as technology becomes more pervasive, there is a growing concern about the impact it may have on privacy and ethics. AI systems are often designed to collect and process large amounts of data, which can raise significant privacy concerns if this data is misused or mishandled.

A subfield of AI, Educational Data Mining (EDM), specifically focuses on the application of AI to educational data. This refers to the process of using data mining analytics to interpret data from educational systems in order to improve student outcomes. EDM applies machine learning (ML), neural networks, and statistical methods to educational data to uncover patterns, trends, and relationships. Specifically, ML algorithms are used to build predictive models based on educational data. For example, a decision tree algorithm can predict student exam performance based on factors such as prior grades, attendance, and demographic information. This, coupled with neural networks (a type of ML algorithm, inspired by the structure of the human brain, used to model relationships such as student behaviour or demographic correlations), allows EDM to perform the following:

- Student performance prediction: predicting student performance on assessments and courses based on demographics and learning behaviour.
- Adaptive learning: personalizing the learning experience for individual students based on their performance and preferences.
- Student behaviour analysis: understanding how students interact with educational technology and what factors influence learning.
- Early warning systems: identifying at-risk students early on and providing targeted interventions.

However, there are several privacy and ethical risks associated with EDM:

- Data collection: EDM involves collecting large amounts of sensitive data.
- Data sharing: sharing educational data between different stakeholders, such as schools, government agencies, and companies, can increase the risk of data breaches and unauthorized access to information.
- Data security: storing and managing large amounts of student data presents a risk of data breaches, hacking, and theft.
- Profiling and discrimination: EDM algorithms can be used to create profiles of students based on their data, which could lead to biased decisions and discrimination.
- Student rights: EDM may infringe on students' rights to control their own personal information.

This is only one of the many examples of AI systems and their moral implications, particularly if they are designed or used in ways that discriminate against certain groups of people. As a result, there is a growing need for a robust and comprehensive framework for privacy and ethics in AI which could address the various challenges and benefits posed by AI, as well as the need to provide guidance on how to build and utilize AI in a manner that considers both privacy and ethics. This document provides an overview of the key privacy and ethical issues in educational AI and discusses the possibility of a framework to address these challenges in order to balance AI's potential benefits and concerns.
Speaker
Speaker biography is not available.

A Novel Pre-Hospital Indoor Rescue Drone For Locating Cardiac Arrest Patients at Home Instantly and Delivering Emergency Medication Under Surveillance Before an EMS Arrives

Max Du (Canada)

0
Out-of-hospital cardiac arrest patients face two challenges. First, they need immediate rescue, since the chances of survival drop to close to zero within 10 minutes, yet the average EMS response time in the US and Canada is 9 minutes or longer. Second, if patients are home alone, only 4% survive, and over half of these arrests are unwitnessed.

In this project, a novel Pre-Hospital Indoor Rescue Drone is designed and constructed to address these two challenges and save more lives, by starting the rescue faster and witnessing more patients, including those home alone, in the first critical minutes before EMS arrives. The drone system is designed for indoor use, functioning as a personal home drone on standby. It has four design features: 1) auto-activating the drone and locating the patient instantly after receiving a wireless alert from the patient; 2) providing live-video surveillance of the patient together with EMS; 3) delivering the patient's prescribed emergency medication under surveillance; and 4) opening room doors if necessary. A drone homebase is designed to enable auto-activation of the drone and keep it on power standby 24/7, using ESP32-C3 M5Stamp wireless communication and a pulley mechanism driven by a stepper motor. A web server is created for EMS to activate remote surveillance and phone calls with the patient. Through a low-cost Android phone mounted on the drone with a screen-mirroring app, first responders can instantly see the patient's situation and specific position in the house. An auto-injector is designed, consisting of a modified linear lift system and an intramuscular needleless injector; a web server is created for EMS to remotely control the speed and direction of the motors on the auto-injector; and a medicine pill box is designed and attached on top of the drone beside the auto-injector. A door-mounted servo system is designed to open a room door through a wirelessly controlled gripper.

The current prototype of the Pre-Hospital Indoor Rescue Drone is constructed, automated, and operational. Tested in a residential setting simulating a real scenario with a random patient, the design was validated for: 1) drone auto-activation and instant approach to the patient, reaching a patient upstairs in 55 seconds; 2) live video surveillance of the patient and phone call communication from 11 km away; 3) smooth delivery of emergency medication with a pillbox and an intramuscular auto-injector positioned 3-4 centimeters from the patient; and 4) room door opening through remote control. This innovation is the first indoor pre-hospital rescue drone to help save cardiac arrest patients, and it can be affordably integrated into the existing EMS rescue process to improve survival chances, shorten recovery time, and reduce healthcare costs.
Speaker
Speaker biography is not available.

Wireless Networked Motion Planning Control for a QBOT2

Saami Ali (Cold Spring Harbor High School, USA)

0
This work investigates the trajectory-tracking motion control problem for a QBot 2 using wireless network communication with a delay. The QBot 2 is an autonomous wheeled differential-drive mobile ground robot designed by Quanser, whose kinematics are described by a nonlinear kinematic model. This work introduces a control methodology that deals with the perturbations added to the system model by communication through a wireless channel. The system thus becomes a wireless control system (WCS), and these perturbations can interfere with the feedback signal and cause errors in the system response. The work proposes a control methodology to eliminate these errors, with the main focus on the uncertain time-varying delays inherent in wireless communication links, by modifying the mathematical model to incorporate the delay in the feedback signal. The first step is to linearize the kinematic state-space model about a desired trajectory. Second, the model is converted from continuous time to discrete time, the delay is incorporated into the discrete model, and the effect of the delay on the closed-loop behavior is studied. Third, an observer-based discrete state-feedback controller is designed to track the desired trajectory. The controller is simulated in the MATLAB and QUARC simulation environments. Finally, the effectiveness of the wireless controller is experimentally validated on the QBot 2 hardware.
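To illustrate how a feedback delay enters the discrete-time loop described above, here is a minimal sketch (the matrices and gain are hypothetical toy values, not the QBot 2 model or the controller from this work):

```python
import numpy as np

# Toy discrete-time loop x[k+1] = A x[k] + B u[k], where the controller only
# sees a delayed state x[k - d], as happens over a wireless channel.
# A, B, and K are illustrative values, not the QBot 2 kinematic model.

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])   # double-integrator-like dynamics (dt = 0.1 s)
B = np.array([[0.005],
              [0.1]])
K = np.array([[4.0, 3.0]])   # hypothetical stabilizing state-feedback gain


def simulate(delay_steps: int, n_steps: int = 100):
    """Run the closed loop when the feedback uses x[k - delay_steps]."""
    x = np.array([[1.0], [0.0]])  # initial tracking error
    history = [x]
    for _ in range(n_steps):
        # Use the newest state the controller could have received.
        x_fb = history[max(0, len(history) - 1 - delay_steps)]
        u = -K @ x_fb                 # delayed state feedback
        x = A @ x + B @ u
        history.append(x)
    return history


# Without delay the tracking error decays; with delay the response degrades
# and can even destabilize, which motivates designing for the delay explicitly.
err_no_delay = np.linalg.norm(simulate(0)[-1])
err_delayed = np.linalg.norm(simulate(5)[-1])
```

An observer-based design, as in this work, additionally reconstructs the unmeasured state from the delayed measurements before applying the feedback gain.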
Speaker
Speaker biography is not available.

The "Rock Candy" Approach for Lithium Extraction

Qixiang Feng (USA); Zhiyong Ren (Princeton University, USA); Qiang Chen (Princeton International School of Math and Science, USA)

0
To achieve the Sustainable Development Goals, the use of electric vehicles (EVs) is being increasingly encouraged, and many states have set no-fuel-vehicle deadlines for the coming decades. However, lithium, a critical mineral in EV batteries, still faces a huge gap between production and consumption, especially given the surge in demand for EV batteries expected in the coming decades. Under the International Energy Agency's most ambitious climate scenario, lithium supply will have to grow fortyfold by 2040 from today's levels. Even today, there is already a capacity gap of about 10,000 tons of lithium carbonate equivalent. Lithium is an important strategic resource, but the US currently has only limited production capacity. The only active lithium mine in the US is Silver Peak in Nevada, which contributes less than 1% of global lithium production, making supply a major barrier and national security issue. Current approaches to lithium extraction mostly rely on sunlight and geothermal evaporation of seawater or brine, but these methods are generally energy-inefficient and ineffective at separating lithium from other ions in the brine. This issue encouraged me to start this project on the "rock candy approach" for lithium extraction. My method uses differences in the solubility and mobility of different ions to separate lithium from the others, and the goal is to find the most efficient fabric structure to carry out this process.
Speaker
Speaker biography is not available.

Beauty or the Beast: Understanding the Durability of Nail Polishes

Anwita Wadekar (St. Bernadette School, USA)

0
Plasticizers are chemicals added to nail polishes to increase their durability so that they last long and do not chip or fade easily. In this project, I designed experiments to test the hypothesis that a nail polish containing a greater number of plasticizers has higher durability.

I chose three nail polishes from Sally Hansen: Xtreme Wear, which contains two plasticizers; Complete Manicure, which contains one plasticizer; and Good Kind Pure Vegan, which has zero plasticizers. I painted five fake nails with each of the three nail polishes and attached them to fake hands. I then put these fake hands through rough-, moderate-, and light-use conditions. The rough-use experiment consisted of rubbing sandpaper against each nail and counting the number of rubs until the nail polish started to chip. The moderate-use experiment mimicked dishwashing: I put dish soap into a bucket of water, rubbed a sponge gently across the nails, and measured the time it took for the nail polish to fade. The last experiment, the light-use experiment, simulated handwashing: I put hand soap and water into a bucket and moved the hand around while tracking the time taken for the nail polish to fade. I found that the Xtreme Wear nail polish takes longer to fade and chip than the Complete Manicure polish, which in turn takes longer than the Pure Vegan polish.

I then studied the harmful effects of the two plasticizers used in the Xtreme Wear nail polish, Ethyl Tosylamide and Triphenyl Phosphate. Using the Skin Deep Database from the Environmental Working Group, I found that Ethyl Tosylamide is not extremely toxic, but it can still affect the endocrine and hormonal systems, causing cancers and birth defects along with some allergic reactions. Triphenyl Phosphate, also known as TPHP, is more toxic than Ethyl Tosylamide. It causes reproductive issues, a couple of animal studies have revealed neurodevelopmental effects at small doses, and some human case studies have shown disruption to the endocrine system as well. It is also used in the manufacturing of plastics and as a flame retardant. TPHP is an environmental toxin, too: when nail polishes are thrown away, the remaining polish can diffuse from the bottles into soil and water, coming into contact with other species and disrupting their bodies.

Through this project, I have learned that many cosmetics and personal care products can contain chemicals that cause short-term and long-term health problems. I plan to use this knowledge to raise awareness among my friends and in my community about toxic chemicals found in everyday cosmetics. I would also like to advocate for a law that prevents the use of such toxic chemicals in cosmetics. In California, legislation banning 24 toxic chemicals in cosmetics was signed back in 2020, making California the first state to stop using these perilous ingredients. I hope to do the same in Massachusetts and inform the community about how toxic and harmful some cosmetic products can be.
Speaker
Speaker biography is not available.

Federated Learning with Prioritized Data Sample Selection

Rebekah Wang (West Windsor-Plainsboro High School South, USA)

1
Machine Learning (ML) powers many tasks, such as predictive text and ad recommendation. Its success lies in the training dataset used to create effective models. However, valuable training data are not always readily available due to data privacy concerns. To ensure data privacy when training ML models, federated learning was developed. In federated learning, an initial global model is distributed from a server to clients (e.g., mobile devices). Each client independently trains the model with its own data and only needs to send model updates back to the server. The server then aggregates these updates and creates a new global model. This process is repeated until the global model converges and a final global model is produced. Federated learning lets the data stay on each client device, maintaining data privacy.

However, when using federated learning to predict user preferences, not all data samples are equally useful. For example, when predicting what videos a user might want to watch next, a user's more recent watch history would be more useful than a user's older watch history. In this case, data samples with a small age could be more useful, where the age of a data sample is defined as the amount of time that has passed since the data sample was generated. Training the model with a higher number of useful data samples would allow the model to make more relevant and accurate predictions. Additionally, when aggregating model updates from the clients, a weighted average should be used, where a heavier weight is given to a model that was trained with more useful data.

Thus, a new federated learning approach with priority-based data sample selection and weighted model aggregation is proposed. Priority-based data sample selection works as follows: when the client devices train their local models, each device should deploy a data sample selection process that prioritizes useful data. The essential idea is to give useful data samples a higher priority or probability, while maintaining the randomness in selected data samples for each training round. Then, when the server uses priority-based weighted model aggregation, the local models from special clients (clients that used a higher percentage of useful data samples during training) will be assigned a heavier weight. This way, the global model will make more relevant predictions as it is more influenced by useful data.
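The two mechanisms described above can be sketched as follows (a minimal illustration in which a model is just a list of floats; the age-based priority formula and all names are assumptions for this sketch, not the exact scheme used in the experiments):

```python
import random

# Sketch of priority-based sample selection and weighted model aggregation.
# A "model" here is a list of floats; the priority formula is illustrative.


def select_samples(samples, ages, n, age_scale=10.0):
    """Draw n samples, biased toward smaller age (fresher, more useful data),
    while keeping the selection random for each training round."""
    priorities = [1.0 / (1.0 + age / age_scale) for age in ages]
    return random.choices(samples, weights=priorities, k=n)


def aggregate(client_models, client_weights):
    """Weighted average of client models; clients that trained on a larger
    share of useful samples get a heavier weight."""
    total = sum(client_weights)
    dim = len(client_models[0])
    return [sum(w * m[i] for w, m in zip(client_weights, client_models)) / total
            for i in range(dim)]


# Two clients: the first trained mostly on fresh data, so it gets double weight.
global_model = aggregate([[1.0, 2.0], [3.0, 4.0]], [2.0, 1.0])
print(global_model)  # ≈ [1.67, 2.67]
```

With equal weights this reduces to plain FedAvg-style averaging, which matches the benchmark the approach was compared against.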

To assess the proposed approach, federated learning using the proposed approach was compared to benchmark federated learning (i.e., FedAvg). After each training round, the accuracy of each global model was tested to graph a global model convergence chart. Three trials were conducted, each with different weights. In the results, as the weights for special clients increased, the global model accuracy increased faster and more dramatically. Even when all the clients had the same weights, the global model still achieved a higher accuracy than the benchmark's. There are a few areas for further study to improve the effectiveness of this approach. For example, the server could adaptively assign weights for each client independently.
Speaker
Speaker biography is not available.

Study on Projects of Natural Restoration of Rivers in Korea and Other Countries

Sahng-Won Lee (Seoul International School, Korea (South)); Richard Kyung (CRG-NJ, USA)

0
Natural environment restoration of the rivers surrounding our living spaces is performed all over the world. This field combines principles from ecology, hydrology, and engineering to develop and implement strategies for restoring damaged rivers and their ecosystems.
The goals of river restoration include improving water quality, restoring habitat for native species, and enhancing recreational opportunities. Effective river restoration requires careful planning, monitoring, and collaboration among various stakeholders, including local communities, government agencies, and scientific experts.
This study addresses recent river restoration projects in progress both internationally and in Korea, and introduces relevant case studies and reviews. Since river restoration is a complex subject that involves more than environmental protection alone, the perspectives of communities living near, and sometimes dependent on, a river are considered and discussed in the presented study.
The natural sciences and engineering may effectively resolve the technical issues, but cooperation with experts in the social sciences and humanities is required to achieve lasting solutions.
Speaker
Speaker biography is not available.

Modeling atmospheric ablation of iron meteors undergoing thermal decomposition

Jonathan Wu (Applied Physics Lab, USA)

0
Threatening the world with mass extinction, meteors enter Earth's atmosphere at high velocities and ablate at some rate. With computational chemistry and fluid-dynamics models, an estimated ablation rate can predict the progression of a falling meteor. Given known atmospheric conditions at an altitude, assumed conditions of a hypothetical meteor, and assumed steady-state conditions in the boundary layer, two equations modeling the flow of energy and mass into and out of the system can be formulated: the surface energy balance (SEB) and surface mass balance (SMB) equations. Because the surface mass balance equation is difficult to compute directly, thermodynamic calculations of the surface energy balance equation as a function of temperature are more feasible, under the assumption that the Lewis number is one, i.e., that the Stanton number for thermal diffusivity equals the Schmidt number for mass diffusivity. By solving the SEB equation using software such as Cantera and a CFD program, the outward diffusive flux resulting from high-enthalpy reactions can be computed at each point of the meteor surface. With spherical integration, the average outward diffusive mass flux and the predicted burn-up time are determined.
Speaker
Speaker biography is not available.

Study on the Electron Carriers in the Active Layers to Improve Photocurrent in Polymer Solar Cells

Geonwoo Bae (Choate Rosemary Hall, USA); Richard Kyung (CRG-NJ, USA)

0
Polymer-based solar cells are a type of photovoltaic (PV) technology that uses organic materials, such as small organic molecules, as the photoactive layer for converting light into electricity. These cells have the potential to be used in a wide range of applications, including portable electronic devices, building-integrated photovoltaics, and large-scale renewable energy systems. They have several advantages over conventional silicon-based solar cells, including low cost, light weight, and flexibility. However, their efficiency is lower than that of traditional silicon-based cells, and they have a shorter lifespan. Despite these limitations, the development of organic solar cells is an active area of research, and significant progress has been made in recent years toward improving their efficiency, stability, and longevity.

In this paper, the active layer of the cell, which contains an electron-rich material and an electron-deficient material, was theoretically and computationally studied to enhance the conduction efficiency of the unit. The properties of the polymers in the photoactive layers, such as the optimized energy, electron distributions, bandgap energy, and electron mobility, were computed and discussed to determine the efficiency of the unit.

The objective of this research is to develop promising new materials that improve the performance of the photoactive layer and increase the overall efficiency of solar cells.
Speaker
Speaker biography is not available.

Study on Hospitality Industry Trends and Changing Demands

Keonha Bae (Choate Rosemary Hall, USA); Richard Kyung (CRG-NJ, USA)

0
A study on hospitality can include an analysis of various aspects of the hospitality industry, including hotels, restaurants, and tourist destinations.
Insights gained through surveys, focus groups, market analysis, and other methods can inform decision-making and improve overall business performance. In the hospitality industry, several factors are key to successful management:

- Personalization: offering customized experiences to guests, such as tailored recommendations and services.
- Technology: using technology to improve guest experiences, such as mobile check-in, smart room technology, and virtual assistants.
- Wellness: providing guests with health and wellness experiences, such as fitness centers, healthy food options, and spa services.
- Unique and experiential accommodations: offering unusual or distinctive accommodation options, such as treehouses, yurts, and tiny homes.

In this paper, these factors and trends are studied as forces shaping the future of the hospitality industry, so that hotels and resorts can incorporate them into their operations to stay competitive and meet the changing demands of guests.
Speaker
Speaker biography is not available.

The Advanced & Automated Pill Tracking & Dispensing System

Archishma Marrapu (Thomas Jefferson High School for Science and Technology in Northern Virginia)

0
Background: Prescription drugs are used daily by over one hundred thirty-one million people in the United States, 80% of whom have claimed to skip a dose at some point in their lives. Importance: Medication adherence can prevent intensified medical conditions and over 125,000 deaths per year. Purpose: The purpose of this study was to create an automated medication tracker and dispenser to reduce the human errors that can negatively impact someone's life. Methods: The current prototype utilizes microelectronic technology, such as an Arduino board and servo motors, and uses Android Studio for the user interface. Results: The prototype has a high accuracy rate of 91.32% across the pill tracking and pill dispensing components. Conclusion: Although there are products on the current market that target this problem, they lack the accuracy, speed, and convenience of this product. This product allows the user to track the pills they are consuming, store their pills, receive reminders when pills must be taken, be notified about skipped doses, upcoming refills, or upcoming appointments, dispense the right number of pills at the right time, and even connect to another account, either to track someone as a caregiver or to be tracked as a care recipient.
Speaker
Speaker biography is not available.

Session Chair

Weihsing Wang (PRISMS)

